This is the code for Boosting Certified $\ell_\infty$-dist Robustness with EMA Method and Ensemble Model. We use the EMA technique and the model ensemble method to improve the performance and robustness of our model. We also use the following packages:
- torch 1.8.1
- torchvision 0.9.1
- numpy 1.20.2
- matplotlib 3.4.0
- tensorboard
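If these packages are not already installed, a minimal install sketch is below; versions are pinned to the list above, and note that a CUDA-enabled build of torch 1.8.1 may need to be installed from the wheel index matching your CUDA version:

```bash
# Pin the package versions listed above; adjust the torch build for your CUDA setup.
pip install torch==1.8.1 torchvision==0.9.1 numpy==1.20.2 matplotlib==3.4.0 tensorboard
```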
After cloning this repo onto your machine, first run the following command to install the CUDA extension, which can speed up the training procedure considerably:

`python setup.py install --user`
You can train your own model with the following command:

`python main.py`
Choose `--model` (MLP, Conv, LeNet, AlexNet, VGGNet) for the network architecture, `--dataset` (MNIST, FashionMNIST, CIFAR10, CIFAR100) for the dataset, `--predictor-hidden-size` for the hidden size of the Predictor, `--loss` (hinge, cross_entropy) for the loss function type, and `--opt` (adamw, madam) for the optimizer type.
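For example, the following sketch trains a single model on MNIST; all flags are described above, but the hidden-size value here is purely illustrative, not a recommended setting:

```bash
# Train a single model on MNIST with the hinge loss and the AdamW optimizer.
# The value 512 for --predictor-hidden-size is an illustrative guess.
python main.py --model MLP --dataset MNIST --predictor-hidden-size 512 --loss hinge --opt adamw
```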
You can also train an ensemble model with the following command:

`python main_ensemble.py`
In addition to the above options, you can use `--model-num` to set the number of models in the ensemble.
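For example, a sketch of an ensemble run (the model count of 3 is illustrative):

```bash
# Train an ensemble of 3 models on CIFAR10 using the cross-entropy loss.
python main_ensemble.py --model-num 3 --dataset CIFAR10 --loss cross_entropy --opt adamw
```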
In this repo, we provide complete training scripts as well. You can run the scripts directly to reproduce the results on the MNIST, Fashion-MNIST, CIFAR-10 and CIFAR-100 datasets in our paper. The scripts are in the `command` folder.
For example, to reproduce the results on MNIST using a single model, run:

`bash command/lipnet++_mnist.sh`
And to reproduce the results on CIFAR-10 using an ensemble model, run:

`bash command/liplenet++_ensemble_cifar10.sh`
We also support multi-GPU training using distributed data parallel. By default, the code will use all available GPUs for training. To use a single GPU, add the parameter `--gpu GPU_ID`, where `GPU_ID` is the ID of the GPU to use. You can also specify `--world-size`, `--rank` and `--dist-url` for advanced multi-GPU training.
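For example (all values here are illustrative; the `--dist-url` format below follows the usual PyTorch `tcp://host:port` convention, which may differ from this repo's default):

```bash
# Train on a single GPU (GPU 0).
python main.py --gpu 0

# Single-node run with explicit distributed settings (illustrative values).
python main.py --world-size 1 --rank 0 --dist-url tcp://localhost:23456
```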
The model is automatically saved when the training procedure finishes. Use `--checkpoint model_file_name.pth` to load a specified model before training. For a pretrained model, you can use `--start-epoch NUM_EPOCHS` to skip training and only test its performance, where `NUM_EPOCHS` is the total number of training epochs.
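For example, to evaluate a saved model without further training (assuming, for illustration, that it was trained for 100 epochs):

```bash
# Load a saved checkpoint and skip straight to testing by setting
# --start-epoch to the total number of training epochs.
python main.py --checkpoint model_file_name.pth --start-epoch 100
```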
By default, the code generates three files named `train.log`, `test.log` and `log.txt`, which contain all training logs. If you want to further display training curves, you can add the parameter `--visualize` to show these curves using TensorBoard.
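For example (the log directory passed to TensorBoard is an assumption; point `--logdir` at wherever the event files are written):

```bash
# Train with visualization enabled, then launch TensorBoard to view the curves.
python main.py --visualize
tensorboard --logdir .
```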
Please contact [email protected] if you have any questions about our paper or the code. Enjoy!