Evaluating the Robustness of Crowd Counting Models via Adversarial Patch Attacks and Randomized Ablation
This is the official implementation of the paper submitted to IEEE Transactions on Information Forensics and Security.
To the best of our knowledge, this is the first work to evaluate the adversarial robustness of crowd counting models both theoretically and empirically.
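For orientation, here is a minimal sketch of the adversarial patch attack idea on a crowd counting model: gradient ascent is restricted to a rectangular patch region and pushes the predicted count up. The model interface, input range, and hyperparameters below are illustrative assumptions, not the exact attack configuration used in the paper.

```python
import torch

def patch_attack(model, image, patch_box, steps=100, step_size=0.01):
    """Sketch: optimize pixels inside a rectangular patch to inflate the predicted count.

    Assumes `model` maps an image tensor (1, 3, H, W) with values in [0, 1] to a density map.
    """
    x1, y1, x2, y2 = patch_box
    adv = image.clone().detach()
    # Only pixels inside the patch are allowed to change.
    mask = torch.zeros_like(adv)
    mask[..., y1:y2, x1:x2] = 1.0

    for _ in range(steps):
        adv.requires_grad_(True)
        count = model(adv).sum()  # integrate the density map -> predicted count
        count.backward()
        with torch.no_grad():
            # Signed gradient ascent restricted to the patch, clamped to the valid pixel range.
            adv = (adv + step_size * adv.grad.sign() * mask).clamp(0.0, 1.0).detach()
    return adv
```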
- Install PyTorch 1.0.0 or later
- Download the ShanghaiTech dataset from Dropbox or Baidu Disk (code: a2v8)
The crowd counting models evaluated in this work build on the following open-source implementations:
- CSRNet: https://github.com/CommissarMa/CSRNet-pytorch
- CAN: https://github.com/CommissarMa/Context-Aware_Crowd_Counting-pytorch
- MCNN: https://github.com/svishwa/crowdcount-mcnn
- CMTL: https://github.com/svishwa/crowdcount-cascaded-mtl
- DA-Net: https://github.com/BigTeacher-777/DA-Net-Crowd-Counting
Thanks to these researchers for sharing their code!
To evaluate a trained model (e.g., the MAE of CSRNet on the test set), please run the Python file Val_CSR_mae.py.
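For orientation, a minimal sketch of what such an MAE evaluation loop typically looks like is shown below; `model` and `test_loader` are placeholders for the CSRNet model and the ShanghaiTech test loader (see the CSRNet repository linked above), not the exact code in Val_CSR_mae.py.

```python
import torch

def evaluate_mae(model, test_loader, device="cuda"):
    """Sketch: mean absolute counting error over the test set."""
    model.eval()
    abs_errors = []
    with torch.no_grad():
        for image, gt_density in test_loader:
            pred_count = model(image.to(device)).sum().item()  # density map -> count
            gt_count = gt_density.sum().item()
            abs_errors.append(abs(pred_count - gt_count))
    return sum(abs_errors) / len(abs_errors)
```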
To train the certified model with randomized ablation, please run the Python file csr_certify_train.py.
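For orientation, the sketch below illustrates the randomized ablation idea behind certified training: only a small random subset of pixel locations is kept per image, so a bounded adversarial patch is unlikely to be sampled. The helper names, the value of `keep`, and zero-filling the ablated pixels are illustrative assumptions and may differ from the exact scheme in csr_certify_train.py.

```python
import torch

def ablate(images, keep=2000):
    """Sketch: keep `keep` random pixel locations per image and zero out the rest.

    Note: ablated pixels are simply zeroed here; the actual implementation may use
    a different null encoding for removed pixels.
    """
    n, c, h, w = images.shape
    flat = images.reshape(n, c, h * w)
    mask = torch.zeros(n, 1, h * w, device=images.device)
    for i in range(n):
        idx = torch.randperm(h * w, device=images.device)[:keep]
        mask[i, 0, idx] = 1.0
    return (flat * mask).reshape(n, c, h, w)

def certified_train_step(model, optimizer, criterion, images, gt_density):
    """Sketch: one training step of the counting model on randomly ablated inputs."""
    optimizer.zero_grad()
    loss = criterion(model(ablate(images)), gt_density)
    loss.backward()
    optimizer.step()
    return loss.item()
```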
Baidu Disk (code: hary)