
Conversation

@dxoigmn (Contributor) commented May 5, 2023

What does this PR do?

I think it's worth discussing whether we should merge this PR. Note that AttackInEvalMode will be wrong in newer versions of PyTorch Lightning, since Lightning puts the model back into train mode after on_train_start is called: https://lightning.ai/docs/pytorch/stable/common/lightning_module.html#hooks.

That said, I do not like using eval mode, since many modules branch on self.training. A better option is to replace batch norm layers with frozen batch norm and to remove dropout layers, since those are the semantics one actually wants; eval mode is just abused to achieve them.
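To make the idea concrete, here is a minimal sketch of that surgery. This is not MART's actual implementation; `FrozenBatchNorm2d` and `freeze_norm_and_dropout` are hypothetical names, and only `BatchNorm2d`/`Dropout` are handled for brevity.

```python
import torch
import torch.nn as nn

class FrozenBatchNorm2d(nn.Module):
    """BatchNorm2d that always normalizes with stored running statistics,
    regardless of self.training (hypothetical minimal implementation)."""

    def __init__(self, bn: nn.BatchNorm2d):
        super().__init__()
        # Buffers, not parameters: frozen statistics and affine terms.
        self.register_buffer("weight", bn.weight.detach().clone())
        self.register_buffer("bias", bn.bias.detach().clone())
        self.register_buffer("running_mean", bn.running_mean.clone())
        self.register_buffer("running_var", bn.running_var.clone())
        self.eps = bn.eps

    def forward(self, x):
        # y = (x - mean) / sqrt(var + eps) * weight + bias, folded into scale/shift.
        scale = self.weight * (self.running_var + self.eps).rsqrt()
        shift = self.bias - self.running_mean * scale
        return x * scale[None, :, None, None] + shift[None, :, None, None]

def freeze_norm_and_dropout(module: nn.Module) -> nn.Module:
    """Recursively replace BatchNorm2d with FrozenBatchNorm2d and Dropout with Identity."""
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            setattr(module, name, FrozenBatchNorm2d(child))
        elif isinstance(child, nn.Dropout):
            setattr(module, name, nn.Identity())
        else:
            freeze_norm_and_dropout(child)
    return module

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.Dropout(0.5))
model = freeze_norm_and_dropout(model)
```

After this, `model.train()` no longer changes the normalization or dropout behavior, which is exactly the property eval mode is usually abused to get.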

Type of change

Please check all relevant options.

  • Improvement (non-breaking)
  • Bug fix (non-breaking)
  • New feature (non-breaking)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

Testing

Please describe the tests that you ran to verify your changes. Consider listing any relevant details of your test configuration.

  • Test A
  • Test B

Before submitting

  • The title is self-explanatory and the description concisely explains the PR
  • My PR does only one thing, instead of bundling different changes together
  • I list all the breaking changes introduced by this pull request
  • I have commented my code
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • I have run pre-commit hooks with pre-commit run -a command without errors

Did you have fun?

Make sure you had fun coding 🙃

@dxoigmn mentioned this pull request Jun 2, 2023
Base automatically changed from adversary_as_lightningmodule to main June 2, 2023 19:33
@dxoigmn requested a review from @mzweilin June 8, 2023 14:27
@mzweilin (Contributor) left a comment


I think this is the right direction (not relying on .eval). I wonder if we can make this automated without specific configuration.

@dxoigmn mentioned this pull request Jun 9, 2023
@dxoigmn (Contributor, Author) commented Jun 12, 2023

I should note that I'm not sure this works in multi-gpu mode.

@dxoigmn (Contributor, Author) commented Jun 12, 2023

> I should note that I'm not sure this works in multi-gpu mode.

This does work, but beware that BatchNorm modules are converted to SyncBatchNorm when using DDP with:

sync_batchnorm: True
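The conversion Lightning applies under `sync_batchnorm: True` is `torch.nn.SyncBatchNorm.convert_sync_batchnorm`, and the resulting modules are no longer instances of `BatchNorm2d`, so any class-based matching has to include `SyncBatchNorm` as well. A small sketch of the pitfall:

```python
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))

# What Lightning does to the model when sync_batchnorm=True is configured
# for DDP: every _BatchNorm layer is replaced by a SyncBatchNorm.
model = nn.SyncBatchNorm.convert_sync_batchnorm(model)

# Matching only on nn.BatchNorm2d would now find nothing; the selector
# must also accept nn.SyncBatchNorm (or match on nn.modules.batchnorm._BatchNorm).
bn_like = [
    m for m in model.modules()
    if isinstance(m, (nn.BatchNorm2d, nn.SyncBatchNorm))
]
```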

attack_in_eval_mode:
  _target_: mart.callbacks.AttackInEvalMode
  module_classes: ???
@dxoigmn (Contributor, Author) commented on the diff:

Add support for module_names?
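One way such a `module_names` option could work alongside `module_classes` is to match against the dotted names from `named_modules()`. This is only a sketch of the suggestion; `select_modules` is a hypothetical helper, not existing MART code.

```python
import torch.nn as nn

def select_modules(model: nn.Module, module_classes=(), module_names=()):
    """Select submodules either by class or by their dotted name in the model.

    Hypothetical helper illustrating how a module_names config key could
    complement the existing module_classes matching.
    """
    selected = {}
    for name, module in model.named_modules():
        if isinstance(module, tuple(module_classes)) or name in module_names:
            selected[name] = module
    return selected

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())
# "1" is picked up by class, "2" by name.
picked = select_modules(model, module_classes=[nn.BatchNorm2d], module_names=["2"])
```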
