
Capture-video flag doesn't work with ppo continuous #503

Closed
RishiMalhotra920 opened this issue Mar 16, 2025 · 4 comments
Comments

@RishiMalhotra920

I installed packages using:

pip install moviepy==1.0.3 pygame==2.1.0 "rich<12.0" tenacity==8.2.2 gymnasium==0.28.1 torch numpy

but I find that the --capture-video flag throws an error. I have pasted the command I run and the error below.

My training command:

python -m cleanrl.ppo_continuous_action --env-id Pendulum-v1 --exp-name ppo --seed 1 --track --wandb-project-name cleanRL --gamma 0.99 --vf-coef 0.5 --ent-coef 0.01 --norm-adv --num-envs 4  --clip-coef 0.2 --num-steps 2048 --clip-vloss --gae-lambda 0.95 --learning-rate 3e-4 --anneal-lr --max-grad-norm 0.5 --update-epochs 20 --num-minibatches 4 --total-timesteps 5000000 --torch-deterministic --exp-name long-experiment-low-lr-custom-annealing --capture-video
Traceback (most recent call last):
  File "/opt/anaconda3/envs/cleanrl_env2/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/anaconda3/envs/cleanrl_env2/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/Users/rishimalhotra/projects/lm_from_scratch/rl/cleanrl/cleanrl/ppo_continuous_action.py", line 223, in <module>
    next_obs, reward, terminations, truncations, infos = envs.step(action.cpu().numpy())
  File "/opt/anaconda3/envs/cleanrl_env2/lib/python3.10/site-packages/gymnasium/vector/vector_env.py", line 203, in step
    return self.step_wait()
  File "/opt/anaconda3/envs/cleanrl_env2/lib/python3.10/site-packages/gymnasium/vector/sync_vector_env.py", line 149, in step_wait
    ) = env.step(action)
  File "/opt/anaconda3/envs/cleanrl_env2/lib/python3.10/site-packages/gymnasium/core.py", line 502, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
  File "/opt/anaconda3/envs/cleanrl_env2/lib/python3.10/site-packages/gymnasium/wrappers/normalize.py", line 133, in step
    obs, rews, terminateds, truncateds, infos = self.env.step(action)
  File "/opt/anaconda3/envs/cleanrl_env2/lib/python3.10/site-packages/gymnasium/core.py", line 469, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
  File "/opt/anaconda3/envs/cleanrl_env2/lib/python3.10/site-packages/gymnasium/wrappers/normalize.py", line 76, in step
    obs, rews, terminateds, truncateds, infos = self.env.step(action)
  File "/opt/anaconda3/envs/cleanrl_env2/lib/python3.10/site-packages/gymnasium/core.py", line 538, in step
    return self.env.step(self.action(action))
  File "/opt/anaconda3/envs/cleanrl_env2/lib/python3.10/site-packages/gymnasium/wrappers/record_episode_statistics.py", line 89, in step
    ) = self.env.step(action)
  File "/opt/anaconda3/envs/cleanrl_env2/lib/python3.10/site-packages/gymnasium/core.py", line 469, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
  File "/opt/anaconda3/envs/cleanrl_env2/lib/python3.10/site-packages/gymnasium/wrappers/record_video.py", line 172, in step
    self.video_recorder.capture_frame()
  File "/opt/anaconda3/envs/cleanrl_env2/lib/python3.10/site-packages/gymnasium/wrappers/monitoring/video_recorder.py", line 114, in capture_frame
    frame = self.env.render()
  File "/opt/anaconda3/envs/cleanrl_env2/lib/python3.10/site-packages/gymnasium/core.py", line 418, in render
    return self.env.render()
  File "/opt/anaconda3/envs/cleanrl_env2/lib/python3.10/site-packages/gymnasium/wrappers/order_enforcing.py", line 70, in render
    return self.env.render(*args, **kwargs)
  File "/opt/anaconda3/envs/cleanrl_env2/lib/python3.10/site-packages/gymnasium/wrappers/env_checker.py", line 65, in render
    return self.env.render(*args, **kwargs)
  File "/opt/anaconda3/envs/cleanrl_env2/lib/python3.10/site-packages/gymnasium/envs/classic_control/pendulum.py", line 237, in render
    scale_img = pygame.transform.smoothscale(
TypeError: size must be two numbers
@RishiMalhotra920
Author

RishiMalhotra920 commented Mar 16, 2025

Adding this to the top of my script fixed it:

import pygame

# Keep a reference to the original function before monkeypatching it.
original_smoothscale = pygame.transform.smoothscale


def safe_smoothscale(surface, size, *args, **kwargs):
    # Fall back to a fixed size if the size argument isn't a 2-tuple.
    if not (isinstance(size, tuple) and len(size) == 2):
        size = (640, 480)  # Default size
    try:
        return original_smoothscale(surface, size, *args, **kwargs)
    except TypeError:
        # If scaling still fails, return the surface unscaled.
        return surface


pygame.transform.smoothscale = safe_smoothscale
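
A narrower variant of the same monkeypatch (just a sketch, assuming the failure comes from the size tuple holding NumPy scalar values that pygame rejects; int_size_smoothscale is an illustrative name, not from the thread) would coerce both dimensions to plain Python ints instead of discarding the computed size:

import pygame

_original_smoothscale = pygame.transform.smoothscale


def int_size_smoothscale(surface, size, *args, **kwargs):
    # Coerce each dimension to a built-in int so pygame sees plain numbers.
    size = (int(size[0]), int(size[1]))
    return _original_smoothscale(surface, size, *args, **kwargs)


pygame.transform.smoothscale = int_size_smoothscale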

@pseudo-rnd-thoughts
Collaborator

My guess is that this is a problem with NumPy 2.0; could you try NumPy 1.24?
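
For reference, pinning NumPy below 2.0 can be done with a standard pip version specifier (an example invocation, not taken from the thread):

pip install "numpy==1.24.*"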

@RishiMalhotra920
Author

Yeah, that did it! Any plans to upgrade the gymnasium and numpy versions?

@pseudo-rnd-thoughts
Collaborator

#502
