This repository implements the paper "Simple and Efficient Partial Graph Adversarial Attack: A New Perspective". Under the global-attack setting, it treats different nodes differently in order to perform more efficient adversarial attacks.
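The core idea of a partial attack is to spend the perturbation budget only on nodes that are worth attacking, rather than treating all nodes uniformly. A minimal sketch of one common selection heuristic, picking the nodes with the smallest classification margin; the function name and the margin criterion here are illustrative, not the paper's exact scoring:

```python
import numpy as np

def select_target_nodes(logits, labels, ratio=0.1):
    """Pick the fraction of nodes with the smallest classification
    margin (true-class score minus best other-class score); low-margin
    nodes are typically the cheapest to flip."""
    n = logits.shape[0]
    true_scores = logits[np.arange(n), labels]
    masked = logits.copy()
    masked[np.arange(n), labels] = -np.inf   # exclude the true class
    runner_up = masked.max(axis=1)
    margin = true_scores - runner_up
    k = max(1, int(ratio * n))
    return np.argsort(margin)[:k]            # indices of the easiest targets
```

Nodes with a large margin are confidently classified and expensive to attack, so excluding them concentrates the budget where flips are likely.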
- models: implementations of the GNN models
- victims: experiments for training victim models
  - configs: model configurations
  - models: trained models
- attackers: implementations of the attack methods
- attack: experiments for attacking
  - configs: hyperparameters of the attackers
  - perturbed_adjs: generated adversarial adjacency matrices
- training models
> cd victims
> python train.py --model=gcn --dataset=cora
- performing attacks
> cd attack
> python gen_attack.py
- training models
> cd victims
> python train.py
- performing attacks
> cd attack
> python gen_attack.py --attack=pga --dataset=cora
> python evasion_attack.py --victim=robust --dataset=cora
> python evasion_attack.py --victim=normal --dataset=cora
> python poison_attack.py --victim=gcn --dataset=cora
> python poison_attack.py --victim=gat --dataset=cora
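The attack scripts above write their perturbed adjacency matrices to attack/perturbed_adjs. A hedged sketch of loading one for downstream evaluation, assuming they are stored as scipy sparse .npz files; check gen_attack.py for the actual naming and format:

```python
import scipy.sparse as sp

def load_perturbed_adj(path):
    """Load a saved adversarial adjacency matrix and sanity-check
    that the graph is still undirected (symmetric)."""
    adj = sp.load_npz(path)             # assumes scipy's .npz sparse format
    assert (adj != adj.T).nnz == 0, "adjacency must stay symmetric"
    return adj
```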
- deeprobust
- torch_geometric
- torch_sparse
- torch_scatter