
Accuracy check #7

Open
ning-wang1 opened this issue Oct 9, 2020 · 2 comments

@ning-wang1

I did not find the code for accuracy check as mentioned in this paper. Is the 'accuracy check' included in the source code? In other words, will the central server check the accuracy of model updates from different participants before aggregating them?

Further, would you please give the parameters (or a running command) to reproduce the results in the paper on attacking the 'krum' and 'coomed' aggregation rules?

@TudouJack

Hello, did you successfully reproduce the attack on the krum and coomed aggregation rules?
I set the parameters as written in the paper: λ=2 with alternating minimization when attacking krum, and λ=1 with targeted model poisoning when attacking coomed. Both experiments failed.
How should the parameters be set to reproduce the paper's results for attacking the 'krum' and 'coomed' aggregation rules?

@arjunbhagoji
Collaborator

arjunbhagoji commented Jul 12, 2021

> I did not find the code for accuracy check as mentioned in this paper. Is the 'accuracy check' included in the source code? In other words, will the central server check the accuracy of model updates from different participants before aggregating them?

For simplicity, the code as implemented does not discard updates from agents whose accuracy falls below a given threshold: we found that, even with the attack, the accuracy of all agents remains satisfactorily high and would not trigger removal in a realistic setting. That said, this check is easy to implement, and I would urge you to submit a PR if possible.
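For illustration, such an accuracy check at the server could be sketched as follows. This is a hypothetical helper, not part of this repository; the `eval_fn` callable and the threshold value are assumptions.

```python
import numpy as np

def filter_updates_by_accuracy(updates, eval_fn, threshold=0.5):
    """Keep only agent updates whose resulting model passes an accuracy check.

    updates   -- list of weight-update arrays, one per agent
    eval_fn   -- callable mapping an update to validation accuracy (assumed interface)
    threshold -- minimum accuracy required for an update to be aggregated
    """
    kept = [u for u in updates if eval_fn(u) >= threshold]
    # Fall back to all updates if everything was filtered out
    return kept if kept else updates

# Toy example: three fake "updates" scored by a stand-in evaluation function
updates = [np.array([1.0]), np.array([2.0]), np.array([3.0])]
scores = {1: 0.9, 2: 0.3, 3: 0.8}
eval_fn = lambda u: scores[int(u[0])]
kept = filter_updates_by_accuracy(updates, eval_fn, threshold=0.5)
# kept holds the two updates that scored 0.9 and 0.8
```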

> Further, would you please give the parameters (or a running command) to reproduce the results in the paper on attacking the 'krum' and 'coomed' aggregation rules?

The results can be reproduced by running the following command for coordinate-wise median:

python dist_train_w_attack.py --dataset=fMNIST --k=10 --C=1.0 --E=5 --T=40 --train --model_num=0 --gar=coomed --gpu_ids 0

and the following for krum, with LAMBDA set to 2 to reproduce the results from the paper exactly:

python dist_train_w_attack.py --dataset=fMNIST --k=10 --C=1.0 --E=5 --T=40 --train --model_num=0 --mal --mal_obj=single --mal_strat=converge_train_alternate_wt_o_dist_self --rho=1e-4 --gar=krum --ls=10 --mal_E=10 --gpu_ids 0 --mal_boost=LAMBDA

Note that --gar has been changed from avg to coomed and krum respectively.
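For readers unfamiliar with the two rules, they can be sketched in a few lines of NumPy. This is a minimal illustration of the standard definitions of coordinate-wise median and Krum, not the repository's implementation.

```python
import numpy as np

def coomed(updates):
    """Coordinate-wise median: the median of each parameter across agents."""
    return np.median(np.stack(updates), axis=0)

def krum(updates, num_malicious):
    """Krum: select the update closest (in squared L2) to its n - f - 2 nearest neighbours."""
    n = len(updates)
    flat = np.stack([u.ravel() for u in updates])
    # Pairwise squared Euclidean distances between all updates
    d = np.sum((flat[:, None, :] - flat[None, :, :]) ** 2, axis=-1)
    m = n - num_malicious - 2  # number of closest neighbours scored
    # Skip index 0 of each sorted row (distance to self) and sum the m nearest
    scores = [np.sum(np.sort(d[i])[1:m + 1]) for i in range(n)]
    return updates[int(np.argmin(scores))]

# Toy example: one outlier among four updates
updates = [np.array([1.0, 1.0]), np.array([1.1, 0.9]),
           np.array([0.9, 1.1]), np.array([10.0, 10.0])]
med = coomed(updates)                       # robust to the outlier
chosen = krum(updates, num_malicious=1)     # picks a benign update
```

Both rules are robust to a single outlier here, which is exactly why the attack needs boosting (the LAMBDA / --mal_boost parameter) to influence the aggregate.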
