TypeError: add(): argument 'alpha' must be Number, not NoneType #35 #3964


Open
reubenwenisch opened this issue Feb 15, 2022 · 13 comments

reubenwenisch commented Feb 15, 2022

I was trying to run the DynamicHead repo with a custom dataset and hit the following error. The repo calls detectron2's launch function, and training fails with the traceback below.

The command I ran was

DETECTRON2_DATASETS=$DATASET python train_net.py --config configs/dyhead_r50_retina_fpn_1x.yaml --num-gpus 8

Logs

Traceback (most recent call last):
  File "train_net_custom.py", line 222, in <module>
    launch(
  File "/home/xxx/anaconda3/envs/detectron/lib/python3.8/site-packages/detectron2/engine/launch.py", line 82, in launch
    main_func(*args)
  File "train_net_custom.py", line 216, in main
    return trainer.train()
  File "/home/xxx/anaconda3/envs/detectron/lib/python3.8/site-packages/detectron2/engine/defaults.py", line 484, in train
    super().train(self.start_iter, self.max_iter)
  File "/home/xxx/anaconda3/envs/detectron/lib/python3.8/site-packages/detectron2/engine/train_loop.py", line 149, in train
    self.run_step()
  File "/home/xxx/anaconda3/envs/detectron/lib/python3.8/site-packages/detectron2/engine/defaults.py", line 494, in run_step
    self._trainer.run_step()
  File "/home/xxx/anaconda3/envs/detectron/lib/python3.8/site-packages/detectron2/engine/train_loop.py", line 294, in run_step
    self.optimizer.step()
  File "/home/xxx/anaconda3/envs/detectron/lib/python3.8/site-packages/torch/optim/lr_scheduler.py", line 65, in wrapper
    return wrapped(*args, **kwargs)
  File "/home/xxx/anaconda3/envs/detectron/lib/python3.8/site-packages/torch/optim/optimizer.py", line 89, in wrapper
    return func(*args, **kwargs)
  File "/home/xxx/anaconda3/envs/detectron/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/xxx/anaconda3/envs/detectron/lib/python3.8/site-packages/torch/optim/sgd.py", line 110, in step
    F.sgd(params_with_grad,
  File "/home/xxx/anaconda3/envs/detectron/lib/python3.8/site-packages/torch/optim/_functional.py", line 160, in sgd
    d_p = d_p.add(param, alpha=weight_decay)
TypeError: add(): argument 'alpha' must be Number, not NoneType

Detectron2 version: 0.6

Expected behaviour: the training loop runs without error.
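
For reference, the last frame shows `d_p.add(param, alpha=weight_decay)` being called with `weight_decay` coming through as `None`. A minimal stand-alone sketch (stand-in tensors, not the actual training code) that reproduces the same message on the PyTorch build in this traceback:

```python
import torch

d_p = torch.randn(3)    # stand-in for a parameter's gradient
param = torch.randn(3)  # stand-in for the parameter itself

d_p.add(param, alpha=1e-4)  # fine: alpha is a plain Python number
d_p.add(param, alpha=None)  # TypeError: add(): argument 'alpha' must be Number, not NoneType
```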

@github-actions

You've chosen to report an unexpected problem or bug. Unless you already know the root cause of it, please include details about it by filling the issue template.
The following information is missing: "Instructions To Reproduce the Issue and Full Logs"; "Your Environment";

github-actions bot added and then removed the needs-more-info (More info is needed to complete the issue) label Feb 15, 2022
ppwwyyxx added the needs-more-info label Feb 15, 2022
@dedoogong

I got the same error!

@reubenwenisch (Author)

@dedoogong This error came up with the latest detectron2 version; I moved back to a previous version and the code started working. Hope this helps.

github-actions bot removed the needs-more-info label Feb 23, 2022
awadsb1 commented Jul 25, 2022

@reubenwenisch which version of Detectron2 did you move to? I'm having the same issue.

@Roywangj

I ran into the same problem. I found that the "weight_decay" setting doesn't accept values written as 1e-4; when I replaced 1e-4 with 0.0001, it was fixed.
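
One possible explanation (an assumption, not something confirmed in this thread): if the value comes from a YAML config, PyYAML's YAML 1.1 resolver only treats numbers containing a decimal point as floats, so 1e-4 is loaded as the string '1e-4' while 0.0001 is loaded as a float. A quick check:

```python
import yaml

print(yaml.safe_load("weight_decay: 1e-4"))    # {'weight_decay': '1e-4'}  -> a string, not a number
print(yaml.safe_load("weight_decay: 1.0e-4"))  # {'weight_decay': 0.0001}  -> parsed as a float
print(yaml.safe_load("weight_decay: 0.0001"))  # {'weight_decay': 0.0001}  -> parsed as a float
```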

Yxt1212 commented Sep 20, 2022

Same error here too.

Roywangj commented Sep 20, 2022 via email

@frostinassiky

> I ran into the same problem. I found that the "weight_decay" setting doesn't accept values written as 1e-4; when I replaced 1e-4 with 0.0001, it was fixed.

Great! The main cause is that the optimizer in this version does not handle a weight_decay of None. I changed it from None to 0 and my code now runs smoothly.
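
A minimal sketch of that failure mode and the fix, assuming the None reaches the optimizer through a per-param-group override (the param groups below are made up for illustration; in detectron2 they would come from the optimizer builder and the SOLVER weight-decay settings):

```python
import torch

model = torch.nn.Linear(2, 2)

# Hypothetical param groups mimicking a config where one weight-decay setting is None.
# A per-group override like this bypasses the validation in SGD.__init__, so the None
# only surfaces later, inside optimizer.step(), as the TypeError from this issue.
param_groups = [
    {"params": [model.weight], "weight_decay": 1e-4},
    {"params": [model.bias],   "weight_decay": None},  # the offending value
]

# The fix discussed above: replace None with 0 before handing the groups to the optimizer.
for group in param_groups:
    if group.get("weight_decay") is None:
        group["weight_decay"] = 0.0

optimizer = torch.optim.SGD(param_groups, lr=0.1)
optimizer.zero_grad()
model(torch.randn(4, 2)).sum().backward()
optimizer.step()  # runs cleanly once every weight_decay is a number
```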

Roywangj commented Oct 9, 2022 via email

yuhao20 commented Jul 29, 2023

> I ran into the same problem. I found that the "weight_decay" setting doesn't accept values written as 1e-4; when I replaced 1e-4 with 0.0001, it was fixed.

> Great! The main cause is that the optimizer in this version does not handle a weight_decay of None. I changed it from None to 0 and my code now runs smoothly.

Excuse me, could you tell me where we can change weight_decay? In the .yaml file?

@EGO-False-Sleep

> I ran into the same problem. I found that the "weight_decay" setting doesn't accept values written as 1e-4; when I replaced 1e-4 with 0.0001, it was fixed.

Hello, I have also been studying this open-source project recently and ran into this problem. Could you tell me where you changed weight_decay? I found two possible locations: defrcn/config/defaults.py line 34, with a value of 5e-5, and detectron2\config\defaults.py line 522, with a value of 0.0001, corresponding to _CC and _C respectively. However, in the get_cfg function in defrcn/config/config.py, the value actually returned seems to be _CC. If you have time, please advise; this is very important to me. Thank you, and I look forward to your reply.
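
One way to see which of those two defaults actually takes effect is to print the merged config that training uses. A sketch only, assuming the DeFRCN-style get_cfg mentioned above and detectron2's standard SOLVER keys (the import path and config file name are placeholders):

```python
from defrcn.config import get_cfg  # assumed import path for the get_cfg in defrcn/config/config.py

cfg = get_cfg()
cfg.merge_from_file("configs/your_config.yaml")  # placeholder: whichever config you train with

# Standard detectron2 weight-decay settings; a None (or a string such as '1e-4') in any of
# these can propagate into the optimizer's param groups and trigger the TypeError above.
print(cfg.SOLVER.WEIGHT_DECAY)
print(cfg.SOLVER.WEIGHT_DECAY_NORM)
print(cfg.SOLVER.WEIGHT_DECAY_BIAS)
```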

Roywangj commented Mar 8, 2025

Hi, this was a long time ago. As I recall, simply changing 1e-4 to 0.0001 was enough; I no longer remember exactly where, so you could try both places. Best wishes.

@EGO-False-Sleep

> Hi, this was a long time ago. As I recall, simply changing 1e-4 to 0.0001 was enough; I no longer remember exactly where, so you could try both places. Best wishes.

Thank you for your reply. I spent a long time on this and eventually found a fix, but other errors kept appearing afterwards, so I have decided to give up on this project. If you have any good open-source projects to recommend for few-shot object detection, I would be very grateful.
