
OmniSafe

Documentation | Implemented Algorithms | Installation | Getting Started | License

OmniSafe is a comprehensive and reliable benchmark for safe reinforcement learning, covering a multitude of SafeRL domains and delivering a new suite of testing environments.

The simulation environments around OmniSafe and its suite of reliable algorithm implementations will help the SafeRL research community replicate and improve on excellent existing work more easily, while also facilitating the validation of new ideas and new algorithms.



Implemented Algorithms

The supported interface algorithms currently include:

Newly Published in 2022

List of Algorithms

  • On-Policy Safe
  • Off-Policy Safe
  • Model-Based Safe
  • Offline Safe
  • Others


Installation

Prerequisites

OmniSafe requires Python 3.8+ and PyTorch 1.10+.

Install from source

git clone https://github.com/PKU-MARL/omnisafe
cd omnisafe
conda create -n omnisafe python=3.8
conda activate omnisafe

# Install omnisafe
pip install -e .
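
OmniSafe is also published on PyPI (per the badge above), so installing the released package may be enough if you don't need an editable checkout; a minimal sketch, assuming the PyPI package name matches the repository name:

pip install omnisafe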

Examples

cd examples
python train_policy.py --env-id SafetyPointGoal1-v0 --algo PPOLag --parallel 1

algo:

| Type | Name |
|------|------|
| Base-On-Policy | PolicyGradient, PPO, NaturalPG, TRPO |
| Base-Off-Policy | DDPG, TD3, SAC |
| Naive Lagrange | RCPO, PPOLag, TRPOLag, DDPGLag, TD3Lag, SACLag |
| PID Lagrange | CPPOPid, TRPOPid |
| First Order | FOCOPS, CUP |
| Second Order | SDDPG, CPO, PCPO |
| Saute RL | PPOSaute, PPOLagSaute |
| Simmer RL | PPOSimmerQ, PPOSimmerPid, PPOLagSimmerQ, PPOLagSimmerPid |
| EarlyTerminated | PPOEarlyTerminated, PPOLagEarlyTerminated |
| Model-Based | CAP, MBPPOLag, SafeLOOP |
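
For example, to train CPO (listed under Second Order above) instead, only the --algo flag changes:

cd examples
python train_policy.py --env-id SafetyPointGoal1-v0 --algo CPO --parallel 1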

env-id: environment ID in Safety Gymnasium; the table below lists the environments that safety-gymnasium supports.

| Category | Task | Agent | Example |
|----------|------|-------|---------|
| Safe Navigation | Goal[012], Button[012], Push[012], Circle[012] | Point, Car, Racecar, Ant | SafetyPointGoal1-v0 |
| Safe Velocity | Velocity | HalfCheetah, Hopper, Swimmer, Walker2d, Ant, Humanoid | SafetyHumanoidVelocity-v4 |

For more information about the environments, please refer to Safety Gymnasium.

parallel: number of parallel training processes.
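
Combining these flags, a run on a Safe Velocity task with several parallel processes might look like the following (a sketch using only entries from the tables above):

cd examples
python train_policy.py --env-id SafetyHumanoidVelocity-v4 --algo PPOLag --parallel 4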


Getting Started

1. Run Agent from preset YAML file

import omnisafe

env = 'SafetyPointGoal1-v0'

agent = omnisafe.Agent('PPOLag', env)
agent.learn()

2. Run Agent from custom config dict

import omnisafe

env = 'SafetyPointGoal1-v0'

custom_dict = {'epochs': 1, 'data_dir': './runs'}
agent = omnisafe.Agent('PPOLag', env, custom_cfgs=custom_dict)
agent.learn()
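
The same custom_cfgs mechanism should work with any algorithm in the table above; a sketch that reuses only the keys already shown, with TRPOLag from the Naive Lagrange row:

import omnisafe

env = 'SafetyHumanoidVelocity-v4'

# 'epochs' and 'data_dir' are the override keys demonstrated above.
custom_dict = {'epochs': 10, 'data_dir': './runs'}
agent = omnisafe.Agent('TRPOLag', env, custom_cfgs=custom_dict)
agent.learn()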

3. Run Agent from custom terminal config

cd examples
python train_policy.py --env-id SafetyPointGoal1-v0 --algo PPOLag --parallel 1

4. Evaluate Saved Policy

import os

import omnisafe


# Just fill your experiment's log directory in here.
# Such as: ~/omnisafe/runs/SafetyPointGoal1-v0/CPO/seed-000-2022-12-25_14-45-05
LOG_DIR = ''

evaluator = omnisafe.Evaluator()
for item in os.scandir(os.path.join(LOG_DIR, 'torch_save')):
    if item.is_file() and item.name.split('.')[-1] == 'pt':
        evaluator.load_saved_model(save_dir=LOG_DIR, model_name=item.name)
        evaluator.render(num_episode=10, camera_name='track', width=256, height=256)
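
If the log directory holds several checkpoints and only the newest one matters, a small variation of the loop above (a sketch, assuming the same torch_save layout) picks the most recently written .pt file:

import os

import omnisafe


LOG_DIR = ''  # fill in your experiment's log directory, as above

save_dir = os.path.join(LOG_DIR, 'torch_save')
# Select the checkpoint with the latest modification time.
latest = max(
    (item for item in os.scandir(save_dir)
     if item.is_file() and item.name.endswith('.pt')),
    key=lambda item: item.stat().st_mtime,
)
evaluator = omnisafe.Evaluator()
evaluator.load_saved_model(save_dir=LOG_DIR, model_name=latest.name)
evaluator.render(num_episode=10, camera_name='track', width=256, height=256)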

The OmniSafe Team

OmniSafe is currently maintained by Borong Zhang, Jiayi Zhou, JTao Dai, Weidong Huang, Ruiyang Sun, Xuehai Pan, and Jiaming Ji, under the instruction of Prof. Yaodong Yang. If you have any questions about using OmniSafe, don't hesitate to ask on the GitHub issue page; we will reply within 2-3 working days.

License

OmniSafe is released under Apache License 2.0.
