Train Custom Robot Dog with RL Locomotion #3789
Hello,
Summary
I’m currently facing challenges implementing reinforcement learning (RL) for my custom robot dog, Botzo, using Isaac Lab. My goal is to teach Botzo to walk using the locomotion task example from:
/source/isaaclab_task/isaaclab_task/manager_based/locomotion. Despite following the example structure, my robot fails to learn stable locomotion, and I’m looking for guidance or feedback from anyone familiar with Isaac Lab or custom robot RL training.
Project Links
Botzo repository/Documentation: IERoboticsAILab/Botzo
URDF file: Botzo URDF description
Robot asset configuration: botzo.py
Task configuration (adapted from A1): Botzo locomotion config
Problem Description
I duplicated the A1 robot dog locomotion configuration and adapted it for Botzo. However, during training, the results are erratic:
Reward: Starts very low and stays around zero.
Mean episode length: Starts high but rapidly decreases to near zero.
Behavior: Upon spawning, the robot moves its legs uncontrollably, immediately making ground contact and triggering a reset.
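I believe the immediate resets come from the base-contact termination term inherited from the velocity locomotion example. Simplified, it looks roughly like the sketch below; the exact module paths and body names depend on the Isaac Lab version and on the robot's base link name, so treat this as illustrative rather than a copy of my config.

```python
# Simplified sketch of the termination terms from the velocity locomotion example (illustrative).
# Newer Isaac Lab releases import from `isaaclab.*`; older ones use `omni.isaac.lab.*`.
from isaaclab.managers import SceneEntityCfg
from isaaclab.managers import TerminationTermCfg as DoneTerm
from isaaclab.utils import configclass

import isaaclab_tasks.manager_based.locomotion.velocity.mdp as mdp


@configclass
class TerminationsCfg:
    """Episode ends on timeout or when the base link makes contact with the ground."""

    time_out = DoneTerm(func=mdp.time_out, time_out=True)
    base_contact = DoneTerm(
        func=mdp.illegal_contact,
        params={
            # "base" must match the robot's actual base link name (e.g. "trunk" for A1).
            "sensor_cfg": SceneEntityCfg("contact_forces", body_names="base"),
            "threshold": 1.0,
        },
    )
```

Since Botzo collapses onto its body right after spawning, a term like this would fire almost immediately, which would explain the near-zero episode lengths.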
Here’s the command I’m using to train:
Example training output video (step 134000):
rl-video-step-134000.mp4
Troubleshooting Attempts
Adjusted the motor configuration between ImplicitActuatorCfg and DCMotorCfg, and observed drastic behavioral changes (see the simplified sketch after this list).
Tuned physical parameters (stiffness, damping, etc.) without achieving stable results.
Verified observation space:
Currently using example code that includes foot contact sensors, even though Botzo lacks them.
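To make the actuator comparison concrete, here is a simplified version of the two variants I have been switching between. The values are illustrative placeholders rather than the exact numbers in botzo.py, and the import path differs slightly between Isaac Lab versions.

```python
# Two actuator variants for the leg joints (illustrative values, not the real botzo.py numbers).
# Newer Isaac Lab releases import from `isaaclab.actuators`; older ones use `omni.isaac.lab.actuators`.
from isaaclab.actuators import DCMotorCfg, ImplicitActuatorCfg

# Variant A: PD control handled implicitly by the PhysX joint drive.
implicit_legs = ImplicitActuatorCfg(
    joint_names_expr=[".*"],  # placeholder regex matching all leg joints
    effort_limit=5.0,         # N*m, illustrative for small hobby servos
    velocity_limit=7.0,       # rad/s, illustrative
    stiffness=25.0,           # PD gain Kp
    damping=0.5,              # PD gain Kd
)

# Variant B: explicit DC-motor model with a torque-speed saturation limit.
dc_legs = DCMotorCfg(
    joint_names_expr=[".*"],
    saturation_effort=5.0,    # peak torque, illustrative
    effort_limit=5.0,
    velocity_limit=7.0,
    stiffness=25.0,
    damping=0.5,
)
```

As far as I understand, the policy outputs joint position targets in both cases, so the stiffness and damping values determine how aggressively the simulated servos track those targets, which may explain the drastic behavioral differences I'm seeing.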
Questions
Should I start from scratch with a custom locomotion task instead of adapting the existing A1 configuration? That is, should I create a new task and define all observations, rewards, and actions myself, making sure the inputs and outputs (target angles for each servo) match my robot? A sketch of the lighter-weight alternative I'm considering is shown after these questions.
Are there recommended strategies or parameters for adapting RL training to a new robot with a different URDF and actuator model?
Could anyone review the provided configuration and share insights or suggestions to improve the training setup?
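Related to the first question: instead of a full rewrite, this is the kind of minimal change I have in mind for removing the contact-sensor-dependent terms from the adapted config. The class and module names below are placeholders on my side, and the attribute names follow the A1/velocity example.

```python
from isaaclab.utils import configclass

# Placeholder import: assumes my adapted A1 config lives in a module like this.
from .rough_env_cfg import BotzoRoughEnvCfg


@configclass
class BotzoNoFootSensorEnvCfg(BotzoRoughEnvCfg):
    """Variant of the adapted config with the foot-contact-dependent terms removed."""

    def __post_init__(self):
        super().__post_init__()
        # Setting a manager term to None removes it from the environment.
        self.rewards.feet_air_time = None       # reward term that reads the foot contact sensor
        self.rewards.undesired_contacts = None  # penalty term that reads the contact sensor
```

Is this a reasonable way to strip out the sensor-dependent terms, or would you recommend defining a fresh task configuration instead?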
Additional Notes
Any feedback, advice, or example configurations would be greatly appreciated. I’m eager to get Botzo walking using the predefined Isaac Lab RL scripts and would love to learn best practices for adapting them to a new robot.
I also tried to follow this tutorial, but I don't see why the learning process works for that robot while it doesn't for my Botzo.
Thank you so much for your time and help!
Gregorio