A flexible framework for efficiently using RLGym v2 to train models.
- Full support for all generics of the RLGym v2 API
- Full support for all functionality of RLGym v2 across multiple environments
- Fast parallelization of environments using Rust and shared memory (see the sketch after this list)
- Support for gathering metrics from environments
- Detailed checkpointing system
- File-based configuration
- An optimized PPO implementation, provided via rlgym-learn-algos
- Allows multiple learning algorithms to provide actions for agents within an environment
- Multi-platform (Windows, Linux)
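The shared-memory bullet above is the core of the speed story: environment processes write observations into a buffer that the learner maps directly, instead of pickling them through a pipe. Below is a minimal Python sketch of that technique using the standard library's `multiprocessing.shared_memory`. It is an illustration of the idea only, not rlgym-learn's actual Rust implementation, and the shapes and names are made up.

```python
# Illustration only: the shared-memory pattern behind the parallelization
# bullet, NOT rlgym-learn's actual (Rust) implementation. Shapes and names
# here are hypothetical.
import numpy as np
from multiprocessing import Process
from multiprocessing.shared_memory import SharedMemory

OBS_SHAPE = (8, 32)  # hypothetical: 8 agents x 32 obs features


def env_worker(shm_name: str) -> None:
    # Attach to the learner's buffer and write an observation batch in place.
    shm = SharedMemory(name=shm_name)
    obs = np.ndarray(OBS_SHAPE, dtype=np.float32, buffer=shm.buf)
    obs[:] = np.random.standard_normal(OBS_SHAPE).astype(np.float32)
    shm.close()


if __name__ == "__main__":
    shm = SharedMemory(create=True, size=int(np.prod(OBS_SHAPE)) * 4)
    worker = Process(target=env_worker, args=(shm.name,))
    worker.start()
    worker.join()
    # The learner reads the worker's writes directly; nothing was serialized.
    obs = np.ndarray(OBS_SHAPE, dtype=np.float32, buffer=shm.buf)
    print("mean obs value:", obs.mean())
    shm.close()
    shm.unlink()
```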
- Install RLGym via `pip install rlgym`. If you're here for Rocket League, you can use `pip install rlgym[rl-rlviser]` instead to get the RLGym API as well as the Rocket League / Sim submodules and rlviser support.
- If you would like to use a GPU, install PyTorch with CUDA
- Install this project via `pip install rlgym-learn`
- Install rlgym-learn-algos via `pip install rlgym-learn-algos`
- If the pip install fails at first, install Rust by following the instructions here
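Once the steps above finish, a quick import check can confirm everything is wired up. This assumes the import names mirror the package names (`rlgym_learn` for rlgym-learn, `rlgym_learn_algos` for rlgym-learn-algos); adjust if your installed versions differ.

```python
# Quick post-install sanity check. Module names are assumed to mirror the
# package names; adjust if they differ in your installed versions.
import rlgym.api           # the RLGym v2 API
import rlgym_learn         # this framework
import rlgym_learn_algos   # the PPO implementation

print("all imports succeeded")
```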
See the RLGym website for complete documentation and demonstration of functionality [COMING SOON]. For now, you can take a look at `quick_start_guide.py` and `speed_test.py` to get a sense of what's going on.
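Until the docs land, the sketch below shows the per-agent dict convention of the RLGym v2 API that this framework builds on: `reset()` returns observations keyed by AgentID, and `step()` takes actions keyed by AgentID and returns per-agent dicts for observations, rewards, and the terminated/truncated flags. `make_env()` and the action-space size are placeholders, not real library functions; see quick_start_guide.py for a real environment setup.

```python
# Hedged sketch of driving a bare RLGym v2 environment with random actions.
# `make_env` is a hypothetical helper standing in for your own env
# construction, and ACTION_COUNT assumes a discrete action parser.
import numpy as np

env = make_env()   # placeholder: build your RLGym v2 env here
ACTION_COUNT = 90  # assumption: size of your discrete action space

obs_dict = env.reset()  # {AgentID: obs}
done = False
while not done:
    # A trained agent would act here; random actions just exercise the loop.
    actions = {agent_id: np.random.randint(ACTION_COUNT, size=(1,))
               for agent_id in obs_dict}
    obs_dict, reward_dict, terminated_dict, truncated_dict = env.step(actions)
    done = any(terminated_dict.values()) or any(truncated_dict.values())
```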
This project was built using Matthew Allen's wonderful RLGym-PPO as a starting point. Although this project has grown to share almost no code with its predecessor, I couldn't have done this without his support in talking through the design of abstractions and without RLGym-PPO to reference.
All of his files that remained similar have been refactored out into rlgym-learn-algos; the one piece still here is util/KBHit.py, contributed by Ian Cunnyngham, which comes from RLGym-PPO.
This framework is designed to be usable in any situation you might use the RLGym API in. However, a couple of assumptions about the usage of RLGym are baked into the functionality of this framework. These are pretty niche, but they are listed below just in case:
- The AgentID hash must fit into a signed 64-bit integer (see the check after this list).
- The obs space type and action space type should not change after the relevant configuration objects' get_x_type functions have been called, and they should be the same across all agents and all envs.
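For the AgentID hash requirement, here is a quick self-contained check you can run against your own AgentID type (plain strings and tuples are common choices). Note that on 64-bit CPython, `hash()` already truncates to a signed 64-bit value, so this mainly matters on other platforms or interpreters.

```python
# Sanity check for the AgentID hash assumption. On 64-bit CPython, hash()
# is already truncated to a signed 64-bit value, so this passes for any
# AgentID there; it is mainly a guard for other platforms/interpreters.
def hash_fits_i64(agent_id) -> bool:
    return -(2**63) <= hash(agent_id) < 2**63

# Common AgentID choices: strings and tuples.
assert hash_fits_i64("blue-0")
assert hash_fits_i64(("orange", 1))
```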