
Adversary as pl.LightningModule #103

Merged

merged 203 commits into main on Jun 2, 2023
Conversation

@dxoigmn (Contributor) commented Mar 14, 2023

What does this PR do?

Right now we treat adversaries as special objects with their own loops and callbacks, when really we should just treat them like LightningModules. Doing so means we can use a plain Trainer to fit their parameters. This PR attempts to make that so.

As of dcf7599, there is a bug in adversarial training.

Depends upon #146 and #147.

Type of change

Please check all relevant options.

  • Improvement (non-breaking)
  • Bug fix (non-breaking)
  • New feature (non-breaking)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

Testing

Please describe the tests that you ran to verify your changes. Consider listing any relevant details of your test configuration.

  • pytest
  • python -m mart experiment=CIFAR10_CNN_Adv trainer=gpu achieves 71% accuracy.
  • python -m mart experiment=CIFAR10_CNN_Adv trainer=ddp datamodule.world_size=2 trainer.devices=2 achieves 71% accuracy.

Before submitting

  • The title is self-explanatory and the description concisely explains the PR
  • My PR does only one thing, instead of bundling different changes together
  • I list all the breaking changes introduced by this pull request
  • I have commented my code
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • I have run pre-commit hooks with pre-commit run -a command without errors

Did you have fun?

Make sure you had fun coding 🙃

@dxoigmn dxoigmn changed the title Adversary as lightningmodule Adversary as pl.LightningModule Mar 14, 2023
@dxoigmn dxoigmn changed the base branch from main to remove_no_adversary March 14, 2023 00:01
@dxoigmn dxoigmn requested a review from mzweilin March 14, 2023 20:50
@dxoigmn dxoigmn marked this pull request as draft May 16, 2023 20:03
@dxoigmn dxoigmn marked this pull request as ready for review May 16, 2023 20:08
@dxoigmn dxoigmn requested a review from mzweilin May 16, 2023 20:08
@dxoigmn dxoigmn changed the base branch from main to better_optimizer May 22, 2023 21:46

on_run_end()
"""
class Adversary(pl.LightningModule):
@dxoigmn (Contributor, Author):
Ideally this would inherit from LitModular. Then we can create pre-defined sequences for training.

@mzweilin (Contributor) commented Jun 1, 2023
I tried to run adversarial training on 2 GPUs, but it failed. Hiding the perturber parameters then accidentally resolved the issue.

python -m mart \
experiment=CIFAR10_CNN_Adv \
trainer=ddp \
trainer.devices=2 \
model.optimizer.lr=0.2 \
trainer.max_steps=2925 \
datamodule.ims_per_batch=256
  File "/home/weilinxu/coder/MART/.venv/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 807, in <listcomp>
    for param_name, param in module.named_parameters(recurse=False)
  File "/home/weilinxu/coder/MART/mart/attack/perturber.py", line 83, in named_parameters
    raise MisconfigurationException("You need to call configure_perturbation before fit.")
pytorch_lightning.utilities.exceptions.MisconfigurationException: You need to call configure_perturbation before fit.
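The failure mode in the traceback can be reproduced with a minimal sketch. This is an illustrative reconstruction, not MART's actual `Perturber` code (the class body and exception type are assumptions): DDP enumerates `named_parameters()` when it wraps the model at the start of `fit`, which happens before the attack has called `configure_perturbation`, so an override that insists on an existing perturbation raises at that point.

```python
import torch


class Perturber(torch.nn.Module):
    """Sketch of a perturber whose parameter is created lazily."""

    def __init__(self):
        super().__init__()
        self.perturbation = None

    def configure_perturbation(self, input):
        self.perturbation = torch.nn.Parameter(torch.zeros_like(input))

    def named_parameters(self, *args, **kwargs):
        if self.perturbation is None:
            # DDP calls named_parameters() while wrapping the model,
            # before configure_perturbation has run, so this raises at fit start.
            raise RuntimeError("You need to call configure_perturbation before fit.")
        return super().named_parameters(*args, **kwargs)
```

"Hiding" the parameters (returning an empty iterator instead of raising while the perturbation is unconfigured) lets DDP wrap the model without error, which would explain why it accidentally resolved the crash, but DDP then never registers the perturbation for gradient synchronization.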

Base automatically changed from better_optimizer to main June 1, 2023 21:12
@@ -0,0 +1,2 @@
attack_in_eval_mode:
Contributor:

Shall we give this a more accurate name?

@dxoigmn (Contributor, Author):

This should go away in #141.

* Fix progress bar display.

* Enable progress bar for adversary.

* Switch on/off progress bar for adversary in the callback config.

* Make a default progress bar for adversary that can be turned off in FGSM.

* Display progress bars of adversary in multi-rank.

* Display gain instead of loss.

* Make process_position configurable and avoid touching the internal variable.

* Make renaming metrics configurable.
@mzweilin (Contributor) left a comment:

LGTM

@dxoigmn dxoigmn merged commit 5508657 into main Jun 2, 2023
@dxoigmn dxoigmn deleted the adversary_as_lightningmodule branch June 2, 2023 19:33
@dxoigmn dxoigmn restored the adversary_as_lightningmodule branch June 2, 2023 20:05
@dxoigmn dxoigmn deleted the adversary_as_lightningmodule branch June 2, 2023 20:06