Fix adversarial image visualizer with canonical batches #227
Conversation
I hope this visualizer goes away one day.
```diff
-        self.target = batch["target"]
+        # Save canonical input and target for on_train_end
+        self.input = batch[0]
+        self.target = batch[1]

     def on_train_end(self, trainer, model):
         # FIXME: We should really just save this to outputs instead of recomputing adv_input
```
Hopefully we do this one day...
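For context, a minimal sketch of what a visualizer callback along these lines might look like once batches are canonical (input, target) tuples. The class name, the keyword-argument call into the adversary, the [0, 255] pixel range, and the output path are illustrative assumptions, not MART's actual implementation.

```python
import torchvision
from pytorch_lightning.callbacks import Callback


class AdversarialImageVisualizer(Callback):
    """Sketch: save the last canonical batch, then render an adversarial example at train end."""

    def __init__(self, output_path="adversarial_example.png"):
        self.output_path = output_path
        self.input = None
        self.target = None

    def on_train_batch_end(self, trainer, model, outputs, batch, batch_idx):
        # Canonical batches are (input, target) tuples rather than dicts.
        # Save canonical input and target for on_train_end.
        self.input = batch[0]
        self.target = batch[1]

    def on_train_end(self, trainer, model):
        # FIXME from the diff above: ideally reuse saved outputs instead of recomputing adv_input.
        adv_input = model(input=self.input, target=self.target)
        torchvision.utils.save_image(adv_input / 255, self.output_path)
```

Whether the adversary is callable with input/target keywords like this is an assumption; the point is only that the callback now indexes the batch positionally instead of by dict key.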
```diff
-    visualizer.on_train_batch_end(trainer, model, outputs, batch, 0)
-    visualizer.on_train_end(trainer, model)
+    visualizer.on_train_batch_end(trainer, adversary, outputs, batch, 0)
+    visualizer.on_train_end(trainer, adversary)
```
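A hypothetical test call site for this change, assuming the callback sketched earlier and mocked trainer/adversary objects; the names and fixtures here are illustrative, not the repository's actual test.

```python
import torch
from unittest.mock import Mock


def test_visualizer_receives_adversary(tmp_path):
    input = torch.randint(0, 256, (1, 3, 32, 32)).float()
    target = torch.zeros(1, dtype=torch.long)
    batch = (input, target)  # canonical (input, target) tuple

    trainer = Mock()
    outputs = Mock()
    adversary = Mock(return_value=input)  # stands in for Adversary.forward()

    visualizer = AdversarialImageVisualizer(output_path=str(tmp_path / "adv.png"))
    # The visualizer is now driven with the adversary rather than the wrapped model.
    visualizer.on_train_batch_end(trainer, adversary, outputs, batch, 0)
    visualizer.on_train_end(trainer, adversary)

    assert (tmp_path / "adv.png").exists()
```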
Does this mean this no longer works with universal perturbations?
Actually, this writes to disk (which is a bad idea). I wish this wrote to tensorboard. I swear there was a PR that did that...
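If the visualizer were to log to TensorBoard instead of writing a file, something along these lines could work; this is a sketch only, assuming the trainer is configured with Lightning's TensorBoardLogger and that adv_input is an NCHW tensor in [0, 255].

```python
from pytorch_lightning.loggers import TensorBoardLogger


def on_train_end(self, trainer, model):
    adv_input = model(input=self.input, target=self.target)
    if isinstance(trainer.logger, TensorBoardLogger):
        # trainer.logger.experiment is a torch.utils.tensorboard SummaryWriter;
        # add_images expects NCHW floats in [0, 1].
        trainer.logger.experiment.add_images(
            "adversarial_examples", adv_input / 255, global_step=trainer.global_step
        )
```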
OK. I will revisit the visualizer later when I have a chance.
What does this PR do?
This PR fixes the adversarial image visualizer after canonicalizing adversarial batches.
The visualizer now calls Adversary.forward() to get adversarial examples.

Type of change
Please check all relevant options.
Testing
Please describe the tests that you ran to verify your changes. Consider listing any relevant details of your test configuration.
pytest
CUDA_VISIBLE_DEVICES=0 python -m mart experiment=CIFAR10_CNN_Adv trainer=gpu trainer.precision=16 reports 70% (21 sec/epoch).
CUDA_VISIBLE_DEVICES=0,1 python -m mart experiment=CIFAR10_CNN_Adv trainer=ddp trainer.precision=16 trainer.devices=2 model.optimizer.lr=0.2 trainer.max_steps=2925 datamodule.ims_per_batch=256 datamodule.world_size=2 reports 70% (14 sec/epoch).

Before submitting
Run the pre-commit run -a command without errors.

Did you have fun?
Make sure you had fun coding 🙃