Tong Wu¹, Feiran Jia², Xiangyu Qi¹, Jiachen T. Wang¹, Vikash Sehwag¹, Saeed Mahloujifar¹, Prateek Mittal¹
¹Princeton University, ²Penn State University
Test-time Adaptation (TTA), where a model is modified based on the test data it sees, has been a promising solution for distribution shift. This paper demonstrates that TTA is subject to novel security risks: malicious test data can cause predictions on clean, unperturbed data to be incorrect. This suggests that adaptive models (models whose predictions depend on interactions among the test inputs) have yet another attack vector that can be exploited.

The code is tested with Python 3.8 and PyTorch 1.13.1 and should be compatible with other package versions. For the remaining packages, run pip install -r requirement.txt
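As a toy illustration of this attack surface (not the attack implemented in this repo), the sketch below shows why batched test-time adaptation is exploitable: when batch normalization uses the statistics of the current test batch, the prediction on a clean sample depends on the other samples in the same batch, so attacker-controlled inputs can change it. The model architecture, dimensions, and perturbation scale are all illustrative assumptions.

```python
# Toy sketch: with TTA-style batch-norm settings, predictions on clean samples
# depend on the other samples in the test batch.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Small model with BatchNorm; Tent-style TTA uses the current batch's statistics.
model = nn.Sequential(nn.Linear(8, 16), nn.BatchNorm1d(16), nn.ReLU(), nn.Linear(16, 2))
model.train()  # keep BN in batch-statistics mode, as in test-time adaptation

clean = torch.randn(4, 8)              # clean, unperturbed test samples
malicious = 50.0 * torch.randn(4, 8)   # attacker-controlled samples (scale is illustrative)

with torch.no_grad():
    benign_preds = model(torch.cat([clean, clean]))[:4].argmax(dim=1)
    attacked_preds = model(torch.cat([clean, malicious]))[:4].argmax(dim=1)

# The clean samples are identical in both batches, yet their predictions can
# differ because the malicious samples skewed the batch-norm statistics.
print("clean-only batch:", benign_preds.tolist())
print("mixed batch:     ", attacked_preds.tolist())
```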
You need to download the CIFAR-C and ImageNet-C data to ../data/.
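If helpful, here is a hedged sketch for fetching CIFAR-10-C, assuming this repo expects the standard Zenodo release of Hendrycks & Dietterich's corruption benchmark and the ../data layout above:

```python
# Sketch: download and unpack CIFAR-10-C into ../data/. The Zenodo record is the
# standard public release; the expected directory layout is an assumption.
import tarfile
import urllib.request
from pathlib import Path

url = "https://zenodo.org/record/2535967/files/CIFAR-10-C.tar"
data_dir = Path("../data")
data_dir.mkdir(parents=True, exist_ok=True)

archive = data_dir / "CIFAR-10-C.tar"
if not archive.exists():
    urllib.request.urlretrieve(url, archive)  # large download (~3 GB)
with tarfile.open(archive) as tar:
    tar.extractall(data_dir)  # yields ../data/CIFAR-10-C/*.npy
```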
Some CIFAR-C and ImageNet-C models are downloaded automatically from RobustBench.
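For reference, loading such a model manually with RobustBench looks roughly like the following; the repo's scripts handle this automatically, and the specific model name here is an assumption.

```python
# Sketch: how RobustBench checkpoints are typically fetched (for reference only).
from robustbench.utils import load_model

# "Standard" is RobustBench's plain (non-robust) baseline; other names vary.
model = load_model(model_name="Standard", dataset="cifar10", threat_model="corruptions")
```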
More configuration options (including the attack options) can be found in conf.py.
Example of running the code on ImageNet-C:
python imagenet_test_attack.py --cfg cfgs/imagenetc/tent.yaml MODEL.ARCH Standard_R50 ATTACK.TARGETED True DATA_DIR "../data" CORRUPTION.SEVERITY [3]
Example of running the code on CIFAR-C:
python cifar_test_attack.py --cfg cfgs/cifar10/tent.yaml MODEL.ARCH Standardwrn28 ATTACK.TARGETED True DATA_DIR "../data" CORRUPTION.SEVERITY [3]
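The trailing KEY VALUE pairs in the commands above are yacs-style overrides of the options defined in conf.py (the Tent codebase this repo builds on uses yacs). A minimal sketch of the mechanism; option names other than those used in the commands above are assumptions:

```python
# Sketch of yacs-style config overrides. Only the keys shown in the commands
# above are taken from this README; the default values are illustrative.
from yacs.config import CfgNode as CN

cfg = CN()
cfg.DATA_DIR = "./data"
cfg.MODEL = CN()
cfg.MODEL.ARCH = "Standard_R50"
cfg.ATTACK = CN()
cfg.ATTACK.TARGETED = False
cfg.CORRUPTION = CN()
cfg.CORRUPTION.SEVERITY = [5]

# Command-line tokens like `ATTACK.TARGETED True CORRUPTION.SEVERITY [3]`
# are parsed (via ast.literal_eval) and merged on top of the YAML config:
cfg.merge_from_list(["ATTACK.TARGETED", "True", "CORRUPTION.SEVERITY", "[3]"])
print(cfg.ATTACK.TARGETED, cfg.CORRUPTION.SEVERITY)  # True [3]
```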
This code builds upon the code accompanying the papers:
"Tent: Fully Test-time Adaptation by Entropy Minimization" at https://github.com/DequanWang/tent.
"Test-time Adaptation via Conjugate Pseudo-Labels" at https://github.com/locuslab/tta_conjugate.git.
If anything is unclear, please open an issue or contact Tong Wu ([email protected]).