FedProx: getting same model despite different mu values #823

Open
chinglamchoi opened this issue May 2, 2023 · 0 comments
chinglamchoi commented May 2, 2023

Hi, I'm using the FedProx optimizer and following the PyTorch MNIST demos. I passed different mu values (I tried 0, 0.001, 0.01, 0.1, 1, 2, 5, 10) but still got the same trained client and server models for the same seed (but different mu values).

In the original paper, mu controls the strength of the proximal term that keeps local updates close to the global model, and mu=0 is equivalent to FedAvg. In this implementation, how does mu affect the model training? Could you point me to the relevant files in which mu is used in optimization?

I found line 93 in `openfl/utilities/optimizers/torch/fedprox.py`: `d_p.add_(p - w_old_p, alpha=mu)`. I verified that my mu values had not been overwritten and were indeed different. Other than that, I couldn't find anything else that directly uses mu in optimization.
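For reference, here is a minimal sketch of how I understand mu should enter each local optimizer step (names like `fedprox_sgd_step` are just illustrative, not the actual OpenFL API): the gradient of every parameter gets a proximal term `mu * (w - w_global)` added, so larger mu pulls the local model toward the global weights, and mu=0 reduces to plain SGD.

```python
import torch

def fedprox_sgd_step(params, global_params, lr=0.01, mu=0.1):
    """Illustrative FedProx-style SGD step (not the OpenFL implementation)."""
    with torch.no_grad():
        for p, w_old in zip(params, global_params):
            if p.grad is None:
                continue
            d_p = p.grad.clone()
            # Proximal term, analogous to line 93 in fedprox.py:
            # pulls the local parameter back toward the global weight w_old.
            d_p.add_(p - w_old, alpha=mu)
            p.add_(d_p, alpha=-lr)
```

Given this, I would expect visibly different client models for mu=0 versus mu=10, which is not what I'm observing.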

Thanks!

tanwarsh self-assigned this Dec 2, 2024