An easy-to-learn, easy-to-extend, and fair-comparison codebase based on PyTorch for federated learning (FL). Please note that this repository is designed mainly for research, so we have dropped many extensions that are unnecessary for a quick start. Example use cases: recognizing the activities of different people without accessing their private data; building a model from multiple patients' data without accessing any individual patient's raw data.
As an initial version, we support the following algorithms, and we are working on more. A minimal sketch of the FedAvg aggregation step is included after the list.
- Baseline: train on each client locally, without communication.
- FedAvg [1].
- FedProx [2].
- FedBN [3].
- FedAP [4].
- MetaFed [5].
- FedCLIP [6].
NOTE: The code for FedCLIP is located at ./fedclip. This folder is independent of the other folders in this repo; you can simply download that folder and run it for this algorithm.
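For reference, below is a minimal PyTorch sketch of the FedAvg [1] aggregation step (weighted averaging of client models). The function name `fedavg_aggregate` and the way client models and sample counts are passed in are assumptions for illustration only; the toolkit's actual implementation lives under alg/.

```python
import copy
from typing import List

import torch
import torch.nn as nn


def fedavg_aggregate(client_models: List[nn.Module], client_sizes: List[int]) -> nn.Module:
    """Average client parameters, weighted by each client's sample count (FedAvg-style)."""
    total = float(sum(client_sizes))
    global_model = copy.deepcopy(client_models[0])
    global_state = global_model.state_dict()

    with torch.no_grad():
        for key in global_state:
            # Weighted sum of the corresponding tensor across all clients.
            global_state[key] = sum(
                client.state_dict()[key].float() * (n / total)
                for client, n in zip(client_models, client_sizes)
            )
    global_model.load_state_dict(global_state)
    return global_model
```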
```
git clone https://github.com/microsoft/PersonalizedFL.git
cd PersonalizedFL
```
We recommend using Python 3.7.1 and torch 1.7.1, which match our development environment. For more environment details and a full reproduction of our results, please refer to the Docker images luwang0517/torch10:latest or jindongwang/docker.
Our code supports the following datasets:

If you want to use your own dataset, please modify datautil/prepare_data.py to include it; a hypothetical example of such a wrapper is sketched below.
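The sketch below shows one way a custom dataset could be wrapped as a PyTorch Dataset. The class name `MyDataset` and its constructor arguments are placeholders, not the toolkit's actual API; mirror the interfaces already defined in datautil/prepare_data.py.

```python
import numpy as np
import torch
from torch.utils.data import Dataset


class MyDataset(Dataset):
    """Hypothetical wrapper for your own data.

    Mirror the dataset classes already defined in datautil/prepare_data.py
    (e.g. per-client feature/label pairs) rather than this exact interface.
    """

    def __init__(self, features: np.ndarray, labels: np.ndarray, transform=None):
        self.features = features
        self.labels = labels
        self.transform = transform

    def __len__(self) -> int:
        return len(self.labels)

    def __getitem__(self, idx: int):
        x = torch.as_tensor(self.features[idx], dtype=torch.float32)
        if self.transform is not None:
            x = self.transform(x)
        y = int(self.labels[idx])
        return x, y
```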
- Modify the corresponding script in scripts/ and run it:

```
bash run.sh
```
We offer a benchmark for OrganS-MNIST. Please note that the results are based on the data splits in split/medmnist0.1. Different data splits may lead to different results. For the complete parameters, please refer to run.sh.
| Non-iid alpha | Base | FedAvg | FedProx | FedBN | FedAP | MetaFed |
|---|---|---|---|---|---|---|
| 0.1 | 73.99 | 75.62 | 75.97 | 79.96 | 81.33 | 83.87 |
| 0.01 | 75.83 | 74.81 | 75.09 | 81.85 | 82.87 | 84.98 |
It is easy to design your own method by following these steps:

- Add your method to alg/ and add a reference to it in alg/algs.py (see the sketch below).
- Modify scripts/run.sh and execute it.
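As a rough illustration of the first step, a new algorithm typically amounts to a class with a client-update step plus a registration entry in alg/algs.py. Everything below (the class name `MyMethod`, its constructor, and the registration comment) is a hypothetical sketch under those assumptions; follow the structure of the existing algorithms in alg/ rather than this exact interface.

```python
# alg/mymethod.py -- hypothetical skeleton; mirror the existing classes in alg/.
import torch.nn as nn
import torch.optim as optim


class MyMethod:
    """Local training on each client; add your own server-side aggregation rule."""

    def __init__(self, model: nn.Module, lr: float = 0.01):
        self.model = model
        self.optimizer = optim.SGD(self.model.parameters(), lr=lr)
        self.criterion = nn.CrossEntropyLoss()

    def client_train(self, loader) -> None:
        """One local epoch on a client's private data loader of (x, y) batches."""
        self.model.train()
        for x, y in loader:
            self.optimizer.zero_grad()
            loss = self.criterion(self.model(x), y)
            loss.backward()
            self.optimizer.step()


# In alg/algs.py, register the new method alongside the others, e.g. by adding it to
# the mapping from algorithm names to classes (match that file's own style):
# ALGORITHMS['mymethod'] = MyMethod   # hypothetical registration
```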
The toolkit is under active development and contributions are welcome! Feel free to submit issues and PRs to ask questions or contribute your code. If you would like to implement new features, please submit an issue to discuss with us first.
[1] McMahan, Brendan, et al. "Communication-efficient learning of deep networks from decentralized data." Artificial Intelligence and Statistics. PMLR, 2017.
[2] Li, Tian, et al. "Federated optimization in heterogeneous networks." Proceedings of Machine Learning and Systems 2 (2020): 429-450.
[3] Li, Xiaoxiao, et al. "FedBN: Federated Learning on Non-IID Features via Local Batch Normalization." International Conference on Learning Representations. 2021.
[4] Lu, Wang, et al. "Personalized Federated Learning with Adaptive Batchnorm for Healthcare." IEEE Transactions on Big Data (2022).
[5] Chen, Yiqiang, et al. "MetaFed: Federated Learning among Federations with Cyclic Knowledge Distillation for Personalized Healthcare." FL-IJCAI Workshop 2022.
[6] Lu, Wang, et al. "FedCLIP: Fast Generalization and Personalization for CLIP in Federated Learning." IEEE Data Engineering Bulletin 2023.
If you think this toolkit or the results are helpful to you and your research, please cite us!
```
@Misc{PersonalizedFL,
    howpublished = {\url{https://github.com/microsoft/PersonalizedFL}},
    title = {PersonalizedFL: Personalized Federated Learning Toolkit},
    author = {Lu, Wang and Wang, Jindong}
}
```
- Wang Lu: [email protected]
- Jindong Wang: [email protected]
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.