
# Evaluating the Robustness of Crowd Counting Models via Adversarial Patch Attacks and Randomized Ablation

This is the official implementation of our paper, submitted to IEEE Transactions on Information Forensics and Security (TIFS).

To the best of our knowledge, this is the first work to evaluate the adversarial robustness of crowd counting models both theoretically and empirically.

## Requirements

  1. Install PyTorch 1.0.0+.

## Data Setup

  1. Download the ShanghaiTech dataset from Dropbox or Baidu Disk (code: a2v8).
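This repo does not prescribe a preprocessing pipeline, but the attacked-model repos below all train against ground-truth density maps built from the ShanghaiTech head annotations. A rough, hypothetical sketch of that conversion (the `.mat` field layout and the fixed-`sigma` Gaussian are assumptions; some models instead use geometry-adaptive kernels):

```python
import numpy as np
from scipy.io import loadmat
from scipy.ndimage import gaussian_filter

def make_density_map(mat_path, height, width, sigma=15.0):
    """Build a ground-truth density map: one Gaussian per annotated head."""
    mat = loadmat(mat_path)
    # Assumed ShanghaiTech .mat layout: an (N, 2) array of (x, y) head points
    points = mat["image_info"][0, 0][0, 0][0]
    density = np.zeros((height, width), dtype=np.float32)
    for x, y in points:
        col = min(max(int(x), 0), width - 1)
        row = min(max(int(y), 0), height - 1)
        density[row, col] += 1.0
    # Smoothing roughly preserves the total count: density.sum() ~= N
    return gaussian_filter(density, sigma)
```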

## Attacked Models

- CSRNet: https://github.com/CommissarMa/CSRNet-pytorch
- CAN: https://github.com/CommissarMa/Context-Aware_Crowd_Counting-pytorch
- MCNN: https://github.com/svishwa/crowdcount-mcnn
- CMTL: https://github.com/svishwa/crowdcount-cascaded-mtl
- DA-Net: https://github.com/BigTeacher-777/DA-Net-Crowd-Counting

Thanks to these researchers for sharing their code!

## How to Attack?

Please run the Python file `Val_CSR_mae.py`.
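As a rough, hypothetical sketch of the idea behind the script (the model interface, patch placement, and loss are assumptions, not the repo's exact code), an adversarial patch can be optimized by gradient ascent to maximize the counting error of a model whose density-map output sums to the predicted count:

```python
import torch

def patch_attack(model, image, gt_count, patch_size=64, steps=100, lr=0.1):
    """Optimize a square patch that maximizes the counting error."""
    model.eval()
    _, _, h, w = image.shape
    # Fix a random location for the patch
    top = torch.randint(0, h - patch_size, (1,)).item()
    left = torch.randint(0, w - patch_size, (1,)).item()
    patch = torch.rand(1, 3, patch_size, patch_size, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)

    for _ in range(steps):
        adv = image.clone()
        adv[:, :, top:top + patch_size, left:left + patch_size] = patch
        pred_count = model(adv).sum()             # density map sums to the count
        loss = -torch.abs(pred_count - gt_count)  # ascend the absolute error
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            patch.clamp_(0.0, 1.0)                # keep the patch a valid image
    return patch.detach(), (top, left)
```

Evaluation then compares the model's MAE on clean images against its MAE on patched images.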

## How to Retrain the Crowd Counting Models?

Please run the Python file `csr_certify_train.py`.
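A minimal sketch of randomized-ablation training, the defense named in the title (names are hypothetical; proper randomized ablation encodes ablated pixels with a distinguishable null value rather than zeros, which this sketch simplifies): keep a random subset of pixels, ablate the rest, and train the counter on the ablated input.

```python
import torch
import torch.nn.functional as F

def ablate(image, k):
    """Retain k randomly chosen pixel locations and zero out the rest."""
    _, _, h, w = image.shape
    idx = torch.randperm(h * w, device=image.device)[:k]
    mask = torch.zeros(h * w, device=image.device)
    mask[idx] = 1.0
    return image * mask.view(1, 1, h, w)

def certified_train_step(model, optimizer, image, gt_density, k=2000):
    """One training step on a randomly ablated copy of the input."""
    model.train()
    pred = model(ablate(image, k))
    # Assumes pred and gt_density share a resolution; crowd counting repos
    # usually downsample the ground truth to match the network's output.
    loss = F.mse_loss(pred, gt_density)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```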

## Want the Certified Retrained Models?

Download them from Baidu Disk (code: hary).
