Tutorials on how to create custom Reinforcement Learning environments with the Gymnasium library (formerly OpenAI's Gym). Each tutorial has a companion video explanation and code walkthrough on my YouTube channel @johnnycode. If the code and video helped you, please consider:
This is a very basic tutorial showing end-to-end how to create a custom Gymnasium-compatible Reinforcement Learning environment. The tutorial is divided into three parts:
- Model your problem.
- Convert your problem into a Gymnasium-compatible environment (a minimal skeleton is sketched after the file reference below).
- Train an agent on your custom environment in two ways: with Q-Learning and with the Stable Baselines3 library.
- v0_warehouse_robot*.py
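To make the conversion step concrete, here is a minimal sketch of what a Gymnasium-compatible environment class looks like. This is an illustrative skeleton under assumed details, not the actual `v0_warehouse_robot` code: the grid dimensions, action encoding, and reward are placeholder choices.

```python
import gymnasium as gym
import numpy as np
from gymnasium import spaces

class WarehouseRobotEnv(gym.Env):
    """Hypothetical grid-world skeleton, not the tutorial's actual code."""

    metadata = {"render_modes": ["human"], "render_fps": 4}

    def __init__(self, grid_rows=4, grid_cols=5, render_mode=None):
        self.grid_rows = grid_rows
        self.grid_cols = grid_cols
        self.render_mode = render_mode
        # Four discrete actions: 0=left, 1=down, 2=right, 3=up
        self.action_space = spaces.Discrete(4)
        # Observation: robot (row, col) followed by target (row, col)
        self.observation_space = spaces.Box(
            low=0,
            high=np.array([grid_rows - 1, grid_cols - 1] * 2),
            dtype=np.int64,
        )

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)  # seeds self.np_random
        self.robot_pos = np.array([0, 0])
        self.target_pos = np.array([self.grid_rows - 1, self.grid_cols - 1])
        return self._get_obs(), {}

    def step(self, action):
        # Move one cell in the chosen direction, clipped to the grid
        deltas = [(0, -1), (1, 0), (0, 1), (-1, 0)]
        self.robot_pos = np.clip(
            self.robot_pos + np.array(deltas[action]),
            [0, 0],
            [self.grid_rows - 1, self.grid_cols - 1],
        )
        terminated = np.array_equal(self.robot_pos, self.target_pos)
        reward = 1.0 if terminated else 0.0
        return self._get_obs(), reward, terminated, False, {}

    def _get_obs(self):
        return np.concatenate([self.robot_pos, self.target_pos])
```

With this structure in place, the environment can be driven by the standard `reset()`/`step()` loop, which is exactly the interface both the Q-Learning code and Stable Baselines3 expect.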
In Part 1, we created a very simple custom Reinforcement Learning environment compatible with Gymnasium (maintained by the Farama Foundation, formerly OpenAI Gym). In this tutorial, we'll make a minor upgrade and visualize our environment using Pygame; a rough sketch of what such a render method can look like follows the file reference below.
- v0_warehouse_robot*.py
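As a rough idea of what the Pygame upgrade involves, here is a sketch of a `render()` method that could be added to the skeleton from Part 1. It is not the tutorial's actual rendering code; the cell size, colors, and window handling are illustrative assumptions.

```python
import pygame

# Hypothetical render() method for the skeleton above -- not the tutorial's
# actual code. Assumes self.grid_rows, self.grid_cols, self.robot_pos,
# self.target_pos, self.render_mode, and metadata["render_fps"] exist.
CELL = 64  # pixel size of one grid cell (illustrative choice)

def render(self):
    if self.render_mode != "human":
        return
    if not hasattr(self, "window"):
        # Lazily create the window and clock on the first render call
        pygame.init()
        self.window = pygame.display.set_mode(
            (self.grid_cols * CELL, self.grid_rows * CELL)
        )
        self.clock = pygame.time.Clock()
    self.window.fill((255, 255, 255))
    # Draw the target (green) and the robot (blue) as filled cells
    for pos, color in [(self.target_pos, (0, 200, 0)),
                       (self.robot_pos, (0, 0, 200))]:
        rect = pygame.Rect(pos[1] * CELL, pos[0] * CELL, CELL, CELL)
        pygame.draw.rect(self.window, color, rect)
    pygame.event.pump()           # keep the OS window responsive
    pygame.display.flip()         # show the newly drawn frame
    self.clock.tick(self.metadata["render_fps"])  # limit frame rate
```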
The Box space type is used in many Gymnasium environments, and you'll likely need it for your custom environment. A Box action space can be used to validate agent actions or to generate random actions, and a Box observation space can be used to validate the environment's state. This video explains and demonstrates how to create Boxes of different sizes/shapes, with lower (`low`) and upper (`high`) boundaries, and with int/float data types.
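For illustration, a few Box constructions and the two uses mentioned above (sampling random values and validating values) might look like this; the specific shapes and bounds are made-up examples:

```python
import numpy as np
from gymnasium import spaces

# 1D float Box: 3 values, each in [-1.0, 1.0]
box_1d = spaces.Box(low=-1.0, high=1.0, shape=(3,), dtype=np.float32)

# 2D int Box with per-element bounds, e.g. (row, col) on a 4x5 grid
box_2d = spaces.Box(low=np.array([0, 0]), high=np.array([3, 4]), dtype=np.int64)

# Generate a random value, e.g. a random action
sample = box_1d.sample()

# Validate values against the space
print(box_1d.contains(np.array([0.5, -0.2, 0.9], dtype=np.float32)))  # True
print(box_2d.contains(np.array([5, 1])))  # False: row 5 is out of bounds
```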
For more Reinforcement Learning and Deep Reinforcement Learning tutorials, check out my Gym Solutions repository.