- 2023-07-27 First Decision: Major revision
- 2023-07-09 Submitted to IEEE Access
This repository implements the paper "On the Defense of Spoofing Countermeasures against Adversarial Attacks". It is our attempt to defend against FGSM and PGD attacks using band-pass filtering and VisuShrink denoising.
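The attacks named above follow the standard FGSM and PGD formulations (a single signed-gradient step, and its iterated, projected variant). As a minimal illustrative sketch, here they are in NumPy against a toy logistic model; the `w`, `b`, and `loss_and_grad` below are hypothetical stand-ins, not the paper's LCNN/SENet countermeasures:

```python
import numpy as np

# Toy "spoofing detector": a fixed logistic model on 16-dim feature vectors.
rng = np.random.default_rng(0)
w = rng.normal(size=16)
b = 0.0

def loss_and_grad(x, y):
    """Binary cross-entropy of the logistic model and its gradient w.r.t. x."""
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))
    loss = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    grad = (p - y) * w  # dL/dx for a logistic model
    return loss, grad

def fgsm(x, y, eps):
    """Single-step FGSM: move each coordinate by eps in the gradient's sign."""
    _, g = loss_and_grad(x, y)
    return x + eps * np.sign(g)

def pgd(x, y, eps, alpha, steps):
    """Iterated FGSM, projecting back into the L-inf ball of radius eps."""
    x_adv = x.copy()
    for _ in range(steps):
        _, g = loss_and_grad(x_adv, y)
        x_adv = x_adv + alpha * np.sign(g)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection step
    return x_adv

x = rng.normal(size=16)
y = 1.0
clean_loss, _ = loss_and_grad(x, y)
adv_loss, _ = loss_and_grad(fgsm(x, y, 0.1), y)
```

In the actual experiments these attacks are generated with ART and torchattacks (credited below), which implement the same update rules for full neural-network models.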
We made several changes to the base repository; please refer to the full credits below.
```shell
conda env create -f env.yml
```

Make sure to resolve any dependency problems that arise.
We have refactored the codebase so that it can be run step by step, but make sure to modify the files in the `config/` folder and the code arguments below. The two augmentation techniques should be run independently for the two experiments. Reserve 1 TB (one terabyte) of hard drive space for a complete experiment; alternatively, you can run a single attack on a single model (for example, an FGSM attack on an LCNN occupies 150 GB of disk space).
- Some parts of the code are for the distillation process; they are not required to reproduce the results of the current paper.
- During the experiments, we used similar settings for a fair comparison.
- The authors' upstream implementations may differ slightly from what is reported in their papers.
- VisuShrink denoising: https://github.com/AP-Atul/Audio-Denoising
- sox for the band-pass filter: https://sox.sourceforge.net
- Adversarial Robustness Toolbox (ART): https://github.com/Trusted-AI/adversarial-robustness-toolbox
- torchattacks: https://adversarial-attacks-pytorch.readthedocs.io/
- We thank the authors of the paper "Adversarial Attacks on Spoofing Countermeasures of Automatic Speaker Verification" for their code base of the two models LCNN and SENet.
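For orientation, the core idea behind the VisuShrink denoising credited above is to soft-threshold wavelet detail coefficients at the universal threshold `sigma * sqrt(2 ln n)`, with `sigma` estimated robustly from the detail band. A minimal single-level Haar sketch in NumPy (illustrative only; it is not the linked repository's implementation, which operates on full audio):

```python
import numpy as np

def haar_fwd(x):
    """One level of the orthonormal Haar transform (len(x) must be even)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # detail coefficients
    return a, d

def haar_inv(a, d):
    """Inverse of haar_fwd: interleave the reconstructed samples."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def visushrink(x):
    """Soft-threshold the detail band at the universal threshold."""
    a, d = haar_fwd(x)
    sigma = np.median(np.abs(d)) / 0.6745        # MAD-based noise estimate
    t = sigma * np.sqrt(2.0 * np.log(x.size))    # universal threshold
    d = np.sign(d) * np.maximum(np.abs(d) - t, 0.0)  # soft thresholding
    return haar_inv(a, d)

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 8 * np.pi, 1024))
noisy = clean + 0.3 * rng.normal(size=clean.size)
denoised = visushrink(noisy)
```

A production denoiser would use a multi-level decomposition and a longer wavelet, but the thresholding rule is the same at every detail level.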