Add callback that freezes specified module #141
base: main
Conversation
I think this is the right direction (not relying on .eval). I wonder if we can make this automated without specific configuration.
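One way that automation could look, sketched here purely as an assumption (nothing in this PR specifies it): when no module classes are configured, default to the layer types whose forward pass branches on self.training.

```python
# Hypothetical default, not code from this PR: target the layer types whose
# behavior differs between train() and eval() when nothing is configured.
import torch.nn as nn

DEFAULT_TRAINING_SENSITIVE = (
    nn.modules.batchnorm._BatchNorm,  # covers BatchNorm1d/2d/3d and SyncBatchNorm
    nn.Dropout,
    nn.Dropout2d,
    nn.Dropout3d,
)


def training_sensitive_modules(model: nn.Module, classes=DEFAULT_TRAINING_SENSITIVE):
    """Return submodules whose behavior depends on self.training."""
    return [m for m in model.modules() if isinstance(m, classes)]
```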
I should note that I'm not sure this works in multi-GPU mode.
This does work, but one must beware of MART/mart/configs/trainer/ddp.yaml, line 8 at ed89c72.
@@ -1,2 +1,11 @@
attack_in_eval_mode:
  _target_: mart.callbacks.AttackInEvalMode
  module_classes: ???
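For context, a minimal sketch of a callback this kind of config could point at. This is an illustration only, not the actual mart.callbacks.AttackInEvalMode implementation, and it assumes module_classes arrives as a sequence of nn.Module subclasses already resolved from the config.

```python
# Illustrative sketch; the real mart.callbacks.AttackInEvalMode may differ.
# Assumes module_classes is a sequence of nn.Module subclasses whose
# instances should stay in eval mode during training.
from pytorch_lightning import Callback


class AttackInEvalMode(Callback):
    def __init__(self, module_classes):
        self.module_classes = tuple(module_classes)

    def on_train_batch_start(self, trainer, pl_module, batch, batch_idx):
        # Re-apply eval mode every batch, since Lightning may switch the
        # whole model back to train mode between hooks.
        for module in pl_module.modules():
            if isinstance(module, self.module_classes):
                module.eval()
```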
Add support for module_names?
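If module_names were supported, selection by dotted name could sit next to selection by class. A hedged sketch follows; get_submodule is standard torch.nn.Module API, but the function and parameter names here are assumptions, not this PR's interface.

```python
# Hypothetical helper, not part of this PR: select submodules either by class
# or by dotted attribute name such as "backbone.layer4".
import torch.nn as nn


def select_modules(model: nn.Module, module_classes=(), module_names=()):
    selected = [m for m in model.modules() if isinstance(m, tuple(module_classes))]
    # nn.Module.get_submodule resolves dotted paths and raises if one is missing.
    selected += [model.get_submodule(name) for name in module_names]
    return selected
```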
What does this PR do?
I think it's worth thinking about whether we should merge this PR. Note that AttackInEvalMode will be wrong in newer versions of PL, since it puts the model into train mode after on_train_start is called: https://lightning.ai/docs/pytorch/stable/common/lightning_module.html#hooks.

That said, I do not like using eval mode, since many modules branch on self.training. I think a better option is to replace batch norm layers with frozen batch norm and remove dropout layers, since that is the semantics one actually wants; eval mode is just abused to do that.
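To make that alternative concrete, here is a hedged sketch (one way it could be done, not code from this PR) that swaps affine BatchNorm2d layers for torchvision's FrozenBatchNorm2d and removes Dropout layers outright.

```python
# Illustrative sketch only, not part of this PR. Assumes 2D, affine batch norm
# layers; FrozenBatchNorm2d keeps the statistics and affine parameters fixed
# regardless of self.training.
import torch.nn as nn
from torchvision.ops import FrozenBatchNorm2d


def replace_bn_and_dropout(model: nn.Module) -> None:
    for name, child in model.named_children():
        if isinstance(child, nn.BatchNorm2d):
            frozen = FrozenBatchNorm2d(child.num_features, eps=child.eps)
            frozen.load_state_dict(child.state_dict())
            setattr(model, name, frozen)
        elif isinstance(child, nn.Dropout):
            # Dropout is already a no-op in eval mode; Identity removes it entirely.
            setattr(model, name, nn.Identity())
        else:
            replace_bn_and_dropout(child)
```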
Type of change
Please check all relevant options.
Testing
Please describe the tests that you ran to verify your changes. Consider listing any relevant details of your test configuration.
Before submitting
Run the pre-commit run -a command without errors.
Did you have fun?
Make sure you had fun coding 🙃