
# FreeAdversarialTraining

This repo contains the PyTorch implementation of the paper 'Adversarial Training for Free!'.

In this work, we train and evaluate the ResNet-50 model on the Intel Image Classification dataset, which can be found here.

To run the notebook locally, make sure the following library requirements are met:

- tqdm
- gdown
- torch

You can install them with:

pip install -U tqdm gdown

Refer to the PyTorch website for local installation of PyTorch.
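As a quick sanity check after installing, you can confirm the PyTorch version and GPU availability:

```python
import torch

print(torch.__version__)           # installed PyTorch version
print(torch.cuda.is_available())   # True if a CUDA-capable GPU is usable
```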

In this work, we first train our model with a standard PyTorch training loop, and then with the Free Adversarial Training algorithm proposed in the paper, shown below:

*(Algorithm listing: Free Adversarial Training with minibatch replay, from the paper.)*
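To make the idea concrete, here is a minimal PyTorch sketch of Free Adversarial Training as described in the paper, not the exact code in our notebook; the function name `free_adversarial_train`, the defaults `m=3` and `eps=4/255`, and the assumption that inputs are scaled to [0, 1] are all illustrative:

```python
import torch
import torch.nn.functional as F

def free_adversarial_train(model, loader, optimizer, device,
                           epochs=12, m=3, eps=4/255):
    """Sketch of 'Free' adversarial training (minibatch replay).

    Each minibatch is replayed m times; a single backward pass per
    replay yields both the parameter gradients (optimizer step) and
    the input gradient (FGSM-style update of the perturbation delta).
    Assumes inputs are scaled to [0, 1].
    """
    model.train()
    delta = None  # perturbation persists across minibatches
    for _ in range(epochs // m):  # divide epochs by m to keep total cost constant
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            if delta is None or delta.shape != x.shape:
                delta = torch.zeros_like(x)  # reset for a differently sized batch
            for _ in range(m):  # minibatch replay
                adv = (x + delta).clamp(0, 1).requires_grad_(True)
                loss = F.cross_entropy(model(adv), y)
                optimizer.zero_grad()
                loss.backward()
                # Ascend on the input, then project back into the eps-ball.
                delta = (delta + eps * adv.grad.sign()).clamp(-eps, eps).detach()
                optimizer.step()  # descend on the parameters with the same gradients
    return model
```

The key design choice is that the perturbation update comes "for free" from the same backward pass used for the weight update, which is what makes the method so much cheaper than standard PGD adversarial training.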

The goal of this algorithm is to build a model that is robust to PGD attacks while remaining cheap and fast to train (7 to 30 times faster than other strong adversarial training methods).

Later on, we validate both models using a standard PyTorch validation loop and, finally, under PGD-7/-10/-20/-40 attacks.
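For reference, here is a minimal sketch of what a PGD-K robustness evaluation looks like, under the same illustrative assumptions as above (inputs in [0, 1]; `eps` and `alpha` are placeholder values, and `pgd_attack`/`robust_accuracy` are names introduced here, not the notebook's):

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=4/255, alpha=1/255, k=7):
    """PGD-K sketch: K steps of projected gradient ascent on the loss,
    starting from a random point inside the eps-ball."""
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(k):
        adv = (x + delta).clamp(0, 1).requires_grad_(True)
        loss = F.cross_entropy(model(adv), y)
        grad, = torch.autograd.grad(loss, adv)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
    return (x + delta).clamp(0, 1)

@torch.no_grad()
def robust_accuracy(model, loader, device, k=7):
    """Accuracy on PGD-K adversarial examples."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        with torch.enable_grad():  # the attack needs gradients even under no_grad
            adv = pgd_attack(model, x, y, k=k)
        correct += (model(adv).argmax(1) == y).sum().item()
        total += y.numel()
    return correct / total
```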

Results can be seen at the very bottom of the .ipynb notebook file.

To quickly validate the best models without re-running the training loop, we provide four pre-trained models that replicate the final results (a download sketch follows the list):

- ResNet-50 on Intel Image Classification, trained with Free Adversarial Training, m = 2: Link (~180 MB)
- ResNet-50 on Intel Image Classification, trained with Free Adversarial Training, m = 3: Link (~180 MB)
- ResNet-50 on Intel Image Classification, trained with Free Adversarial Training, m = 5: Link (~180 MB)
- ResNet-50 on Intel Image Classification, trained with standard PyTorch training: Link (~180 MB)
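The checkpoints can be fetched from the command line with gdown; a sketch, where `FILE_ID` is a placeholder for the ID embedded in the corresponding Drive link above and the output filename is illustrative:

```python
import gdown

# Replace FILE_ID with the ID from the Google Drive link above.
gdown.download("https://drive.google.com/uc?id=FILE_ID",
               "resnet50_free_m2.pth", quiet=False)
```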

For testing, simply go to the "Validation (for testing purposes)" section in the notebook, load the model downloaded earlier, and run the validation cell with the desired K value for the PGD attack.
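Equivalently, outside the notebook, loading a checkpoint and evaluating it might look like the sketch below. It assumes a plain `state_dict` checkpoint (adjust the key if the file wraps it in a dict), a torchvision ResNet-50 head with the dataset's 6 classes, a `val_loader` you have built for the Intel validation split, and the `robust_accuracy` helper sketched earlier; all of these are assumptions, not the notebook's exact code:

```python
import torch
from torchvision.models import resnet50

device = "cuda" if torch.cuda.is_available() else "cpu"

# Intel Image Classification has 6 classes
# (buildings, forest, glacier, mountain, sea, street).
model = resnet50(num_classes=6).to(device)
state = torch.load("resnet50_free_m2.pth", map_location=device)
model.load_state_dict(state)

# k is the PGD step count (7 / 10 / 20 / 40 in our experiments).
print(robust_accuracy(model, val_loader, device, k=20))
```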