- 2023-11-16 Additional Note
- 2023-09-01 Accepted & Early Access
- 2023-08-03 MI-FGSM, SNR measurement added
- 2023-07-27 First Decision: Major revision
- 2023-07-09 Submitted to IEEE Access
This repository implements the paper "On the Defense of Spoofing Countermeasures against Adversarial Attacks". It is our attempt to defend against FGSM and PGD attacks using band-pass filtering and VisuShrink denoising techniques.
We made several changes to the base repository; please refer to the full credits below.
```bash
conda env create -f env.yml
```

Make sure to resolve any remaining dependency issues.
We have refactored the codebase so that it can be run step by step, but make sure to modify the files in the `config/` folder and the code arguments below. The two augmentation techniques should be run independently for the two experiments. Make sure to spare 1 TB (one terabyte) of hard drive space for a complete experiment. Otherwise, you can run an attack on a single model (for example, an FGSM attack on an LCNN occupies 150 GB of disk space).
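To illustrate what such a single-model attack looks like, below is a minimal, self-contained sketch using torchattacks (credited in the acknowledgements). The dummy model, feature shapes, and `eps` value are placeholders for illustration only and do not reflect the actual scripts or settings in this repository.

```python
# Illustrative sketch only: crafting FGSM adversarial examples with torchattacks.
import torch
import torch.nn as nn
import torchattacks

# A tiny stand-in for a spoofing countermeasure (the real repo uses LCNN / SENet);
# it maps a flattened feature map to bonafide/spoof logits.
class DummyCM(nn.Module):
    def __init__(self, n_feats=60 * 100, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(n_feats, n_classes))

    def forward(self, x):
        return self.net(x)

model = DummyCM().eval()

# Perturbation budget eps is an example value, not the paper's setting.
# Note: torchattacks clamps inputs to [0, 1], so features may need scaling.
atk = torchattacks.FGSM(model, eps=0.001)
# PGD is configured analogously:
# atk = torchattacks.PGD(model, eps=0.001, alpha=0.0005, steps=10)

features = torch.rand(4, 1, 60, 100)      # fake batch of CM input features in [0, 1]
labels = torch.randint(0, 2, (4,))        # fake bonafide/spoof labels
adv_features = atk(features, labels)      # adversarial version of the batch

with torch.no_grad():
    scores = model(adv_features)          # countermeasure scores on adversarial inputs
```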
GitHub does not allow embedding audio content, so I had to use mp4 embedding instead. Make sure to turn on the speaker buttons below.
The band-pass filter has the strongest effect in removing noise from the original audio, whereas the adversarial sample does not necessarily sound noisier than the original. A minimal sketch of both pre-processing steps is given after the samples below.
Original sample
LA_E_1239941_original.mp4
Adversarial sample
LA_E_1239941_adv.mp4
Denoised sample
LA_E_1239941_denoised.mp4
Bandpassed sample
LA_E_1239941_bandpassed.mp4
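For reference, here is a minimal sketch of the two pre-processing defenses applied above, assuming a 300-5000 Hz pass band (the range mentioned in the additional note at the end of this README) and the standard VisuShrink universal threshold. The actual pipeline relies on sox and the Audio-Denoising code credited below, so this is only an approximation of the idea, not the repository's implementation.

```python
# Illustrative sketch: band-pass filtering plus VisuShrink wavelet denoising.
import numpy as np
import pywt
from scipy.signal import butter, sosfiltfilt

def bandpass(audio, sr, low=300.0, high=5000.0, order=4):
    """Butterworth band-pass; the repo itself uses sox, this is an equivalent idea."""
    sos = butter(order, [low, high], btype="bandpass", fs=sr, output="sos")
    return sosfiltfilt(sos, audio)

def visushrink_denoise(audio, wavelet="db8", level=4):
    """VisuShrink: soft-threshold wavelet coefficients with the universal threshold."""
    coeffs = pywt.wavedec(audio, wavelet, level=level)
    # Noise level estimated from the finest detail coefficients (median absolute deviation).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    # Universal threshold: sigma * sqrt(2 * ln(N)).
    thresh = sigma * np.sqrt(2.0 * np.log(len(audio)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(audio)]

# Example on a synthetic noisy tone (a real run would load an ASVspoof utterance).
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
noisy = np.sin(2 * np.pi * 440 * t) + 0.05 * np.random.randn(sr)
cleaned = visushrink_denoise(bandpass(noisy, sr))
```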
- Some parts of the code are for the distillation process; they are not required to reproduce the results of the current paper.
- During the experiments, we used similar settings for a fair comparison.
- The authors' upstream implementation can differ slightly from what is reported in their paper.
- VisuShrink denoising: https://github.com/AP-Atul/Audio-Denoising
- sox for the band-pass filter: https://sox.sourceforge.net
- Adversarial Robustness Toolbox (ART): https://github.com/Trusted-AI/adversarial-robustness-toolbox
- torchattacks: https://adversarial-attacks-pytorch.readthedocs.io/
- We thank the authors of the paper "Adversarial Attacks on Spoofing Countermeasures of Automatic Speaker Verification" for their code base of the two models LCNN and SENet. Their code base can be found at https://github.com/ano-demo/AdvAttacksASVspoof. Previously I created a fork of this repo, located at https://github.com/nguyenvulong/AdvDefenseCM_legacy.
- Today (2023-11-16), I discovered a paper named "DOMPTEUR: Taming Audio Adversarial Examples" in which the authors use a similar technique, limiting the frequency range to 300-5000 Hz. Unfortunately, my finding came too late, so I could not reference this paper in my manuscript. Even though our study was conducted independently, I would like to give a shout-out to the authors, since they were well ahead of us in using this method to defend against adversarial attacks in Automatic Speech Recognition (ASR) systems. While our study is about spoofing countermeasures, the effect should be very similar, if not identical.