Releases · alibaba/FederatedScope
Release v0.3.0
Summary
The improvements and modifications in this release (FederatedScope v0.3.0) are summarized as follows:
- Tree-based Models. FederatedScope now allows users to train tree-based models in vertical federated learning (VFL). We provide implementations of several widely used models (such as XGBoost, GBDT, and random forests) and dataloaders for benchmark datasets. For the different kinds of tree-based models in VFL, users can apply different protection mechanisms (such as DP, OP_Boost, and HE) to adjust the strength of privacy protection accordingly. Note that these modules are also built with an event-driven architecture to support both convenient usage and flexible customization. For more details, please refer to `federatedscope/vertical_fl`.
- Efficiency and Effectiveness. We provide several advanced functionalities to improve both the efficiency and the effectiveness of computation and communication in FL algorithms, including training parallelization (at `federatedscope/core/parallel`), message compression (at `federatedscope/core/compression`), and robust aggregators (at `federatedscope/core/aggregators`); a minimal sketch of message compression appears just before the commit list below. These functionalities can help promote federated learning in both academic research and real-world applications.
- Attack and Defense. We provide a range of defense strategies against adversarial attacks, including Krum, Multi-Krum, Median, NormBounding, Bulyan, and Simple Tuning (see the sketch of robust aggregation rules after this list). In addition, we will be releasing a benchmark for backdoor attack and defense in personalized FL, which allows users to test various data-poisoning-based backdoor attacks such as BadNet, Blend, SIG, and edge-case attacks.
- Personalized FL. We leverage FederatedScope to establish a comprehensive benchmark for personalized federated learning (accepted by NeurIPS'22). During this process, a large number of personalized FL algorithm implementations have been improved and validated, and some new personalized FL algorithms have been included. We welcome more contributions and feedback on the research and applications of personalized FL!
- More Exploration and Materials. We continue exploring and developing new algorithms for a wide range of FL applications and research topics, including hyperparameter optimization, graph learning, NLP, fairness, and so on. The materials (such as paper lists) for these promising topics are constantly being updated; please feel free to contribute!
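As a concrete illustration of the robust aggregation rules named above, here is a minimal, self-contained sketch of coordinate-wise Median and Krum. It is a toy example under simplified assumptions (dense NumPy updates, a known bound on Byzantine clients), not the implementation in `federatedscope/core/aggregators`:

```python
# Illustrative sketch only -- NOT the FederatedScope implementation.
import numpy as np

def median_aggregate(updates: np.ndarray) -> np.ndarray:
    """Coordinate-wise median of client updates, shape (n_clients, dim)."""
    return np.median(updates, axis=0)

def krum_aggregate(updates: np.ndarray, n_byzantine: int) -> np.ndarray:
    """Krum (Blanchard et al., NeurIPS'17): return the update whose summed
    squared distance to its n - f - 2 nearest neighbours is smallest."""
    n = len(updates)
    diffs = updates[:, None, :] - updates[None, :, :]
    dists = np.sum(diffs ** 2, axis=-1)           # pairwise squared distances
    n_neighbors = n - n_byzantine - 2
    scores = [np.sort(np.delete(dists[i], i))[:n_neighbors].sum()
              for i in range(n)]
    return updates[int(np.argmin(scores))]

updates = np.random.randn(10, 5)                  # 10 clients, 5-dim updates
print(median_aggregate(updates))
print(krum_aggregate(updates, n_byzantine=2))
```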
FederatedScope aims to provide both easy-to-use functionalities and flexible development interfaces for users. We sincerely hope that FederatedScope can help users and developers build new FL applications and propose new FL algorithms, and we warmly welcome the community to contribute via discussions, suggestions, code contributions, and other forms of participation.
Thank you very much for your interest!
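As another illustration, below is a minimal sketch of lossy message compression via uniform quantization, in the spirit of (but not identical to) the modules at `federatedscope/core/compression`; the function names and scheme here are hypothetical:

```python
# Illustrative sketch only: 8-bit uniform quantization of a model update.
import numpy as np

def quantize(x: np.ndarray, n_bits: int = 8):
    """Encode float32 values as unsigned integers plus (offset, scale)."""
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / (2 ** n_bits - 1) or 1.0     # avoid a zero scale
    q = np.round((x - lo) / scale).astype(np.uint8)  # valid for n_bits <= 8
    return q, lo, scale

def dequantize(q: np.ndarray, lo: float, scale: float) -> np.ndarray:
    """Lossy reconstruction of the original float32 values."""
    return q.astype(np.float32) * scale + lo

update = np.random.randn(1000).astype(np.float32)
q, lo, scale = quantize(update)          # ~4x smaller payload than float32
recovered = dequantize(q, lo, scale)
print(np.abs(recovered - update).max())  # small quantization error
```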
Commits
Features & Enhancements
- [Tree] Dev label-based tree model for VFL @qbc2016 @xieyxclack (#568, #559, #554, #528)
- Add FedRep and Simple_tuning @Alan-Qin (#564)
- Load data from files @xieyxclack (#558)
- Add compression methods @xieyxclack (#555)
- Add several Byzantine robust algorithms @private-mechanism (#552)
- [Tree] Support distributed mode for VFL @qbc2016 (#553, #476, #471)
- [Tree] Add xgb evaluation @qbc2016 (#549, #434)
- Optimize MF model @rayrayraykk (#544)
- [Tree] Dev feature-based tree models for VFL @xieyxclack @qbc2016 (#533, #530, #529, #523, #510, #497, #482, #476, #439, #427)
- Add FedSGLD Exp @joneswong (#520)
- [HPO] Add SWA @rayrayraykk (#519)
- Add new defense algo and sampler @xieyxclack (#512)
- [HPO] pfedhpo and fts @TheSunWillRise (#509)
- Add GitHub Actions of CI and format check @rayrayraykk (#508, #414)
- Parallelization for standalone mode with NCCL @pan-x-c (#487)
- [Tree] Add feedback during training of xgb @qbc2016 (#484, #438)
- [HPO] Enabled personalized policy for fedex @joneswong (#481)
- Enable minimizing local entropy @joneswong (#468)
- Add data, model, and algo that FedSAM needs @joneswong (#453)
- [NLP] Initialization for hetero tasks in NLP @xieyxclack (#449)
- Add condition param to autotune; Add feature engineering module @rayrayraykk (#426)
- Add dataset for vertical_fl @qbc2016 (#423)
- Refactor FedRunner, optimize trainer module and optimize CI @rayrayraykk (#415)
- Enhance client_cfg and add new dataset @rayrayraykk (#413)
- [NLP] Development for fl-nlp-hetero-tasks @cheneydon (#410)
- Add more registers & refactor splitter @rayrayraykk @xieyxclack (#466, #394, #372)
- Add more fairness metrics @cuiyuebing (#392)
- Add checks for completeness of msg_handler @rayrayraykk (#388)
- [HPO] Add HPOBench as backend demo for FedHPOBench @rayrayraykk (#474, #381, #377)
- Refactor data-related interfaces & add interfaces for trainer and worker @rayrayraykk (#365)
- Add color logging and move logging related utils to logging @rayrayraykk (#355)
- FedGlobalContrast and FedSimCLR baseline @xkxxfyf (#354)
- Add parameters to control whether to check cfg @rayrayraykk (#351)
- [HPO] Support fairness related vector value in FedHPOB @rayrayraykk (#348)
- [HPO] Add hyperband and randomsearch from HpBandSter @rayrayraykk (#343)
- Add utils for drawing landscapes @rayrayraykk (#338)
- [pFL] Remove redundant eval hook for pFedMe @yxdyc (#337)
- Add new system model @rayrayraykk (#336)
- Membership inference attack: add comparison when target data is in/not in the training batch @Osier-Yi (#335)
- [HPO] Add grid search and twitter to FedHPO-B @rayrayraykk (#324, #320)
- Make the message comparator more robust @yxdyc (#314)
- [HPO] Add BO_GP and BO_RF @rayrayraykk (#311)
- [HPO] Enable wrap hpbandster for FedEx @rayrayraykk (#308)
- [HPO] Apply the updates according to the exp of fedhpo-b @joneswong (#307)
- Support using cached data and re-splitting for huggingface datasets @yxdyc (#302)
- Re-organize aggregators @xieyxclack (#299)
- Support help and required argument for the configs @yxdyc (#294)
- Add cross-device recsys dataset Netflix @rayrayraykk (#281)
Bug Fixes
- Bug fix for nbafl @DavdGao (#566)
- Bugfix for vfl algos and datasets @xieyxclack @qbc2016 (#550, #548, #537, #506, #475, #436)
- Minor fixes for cfg, scripts, docs, and others @xieyxclack @rayrayraykk @Osier-Yi @qbc2016 (#547, #541, #546, #522, #513, #485, #461, #442, #437, #424, #405, #387, #352, #345, #323, #319, #318, #315, #306)
- Fix python version in dockerfile @rayrayraykk (#536)
- Fix bug in transfomer_builder @rayrayraykk (#515)
- Change default value of msg.content to 'None' @xieyxclack (#496)
- Install libxml-parser-perl in test_atc @xieyxclack (#495)
- Modify the import source of update_logger @private-mechanism (#491)
- Fix roc_auc @rayrayraykk (#467)
- Fix the state of message for evaluation @xieyxclack (#470)
- Fix b-local dissim @rayrayraykk (#463)
- Fix torch trainer example @rayrayraykk (#451)
- Set pin_memory to False to avoid OOM @rayrayraykk (#444)
- Fix rmse metric @DavdGao (#378)
- Fix early_stop when the metric is the larger the better @rayrayraykk (#374)
- Bugfix for merge_data @yxdyc (#385)
- Bugfix for yaml dump due to the Argument class @yxdyc (#358)
- Fix one-shot exp utils @rayrayraykk (#317)
- Fix client state error @joneswong (#291)
- Fix twitter related bugs and merge_test_data @rayrayraykk (#284)
- Fix call_link_level_trainer() and call_node_level_trainer() @ahn1340 (#274)
Documents & Materials
- Update news and README @xieyxclack @rayrayraykk (#569, #563, #526, #521, #402, #395, #361, #326, #295)
- Update README for tree-based models @qbc2016 (#567, #556)
- Update paper list: attack, fairness, incentive @Osier-Yi (#565, #407)
- Update docs of scripts and configs @joneswong @yxdyc @DavdGao @Osier-Yi @xieyxclack @rayrayraykk @qbc2016 (#562, #380, #347, #325, #316, #305, #303, #301, #300, #297, #296, #293, #292, #287, #286, #285, #282)
- Update paper list for pFL @yxdyc (#561, #421)
- Update paper list for FL-NLP @cheneydon (#560, #504, #419, #283)
- Update paper list for untargeted attacks in FL @private-mechanism (#507)
- Update paper list for FL-Tree @qbc2016 (#493, #422, #401)
- Update paper list for FL-Rec @xieyxclack (#425)
- Update gfl paper @rayrayraykk @joneswong (#420, #418, #350)
- Update README.md for installation @rayrayraykk (#417)
- Add paper list for self-supervision, multi-task and medical data on federated learning @xkxxfyf (#375)
- Add KDD'22 tutorial material and news icon @xieyxclack (#362)
- Supplement copyright of GraphGym @joneswong (#333)
- Add scripts for FedProx on Cora @rayrayraykk (#331)
- Update docs and scripts for pFL @yxdyc (#328)
- Update notebook tutorials @rayrayraykk (#322)
- Add scripts for running gcn with dp @rayrayraykk (#313)
- Add README for FS-G @rayrayraykk (#279)
Release v0.2.0
Summary
The improvements included in this release (FederatedScope v0.2.0) are summarized as follows:
- FederatedScope allows users to apply asynchronous training strategies in federated learning with its event-driven architecture, including different aggregation conditions, staleness toleration, broadcasting manners, etc. We also support efficient standalone simulation of cross-device FL with a large number of participants (see the staleness-aware aggregation sketch after this list).
- We add three benchmarks for Federated HPO, Personalized FL, and Hetero-Task FL to promote the application of federated learning in a wide range of scenarios.
- We ease installation, setup, and continuous integration (CI) to make it more friendly for users to get started and customize FederatedScope. Useful visualization functionalities have also been added for users to monitor the training process and evaluation results.
- We add paper lists of related topics, including FL-Recommendation, Federated-HPO, Personalized FL, Federated Graph Learning, FL-NLP, FL-Attacker, FL-Incentive-Mechanism, and so on. These materials are constantly being updated.
- Several novel features are also included in this release, such as performance attacks, the organizer, generalization to unseen clients, splitters, client samplers, and so on, which enhance FederatedScope's robustness and comprehensiveness.
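To make the asynchronous training strategies above concrete, here is a minimal, hypothetical sketch of staleness-aware aggregation on the server side; FederatedScope's event-driven implementation supports richer aggregation conditions and broadcasting manners:

```python
# Illustrative sketch only: apply each client update on arrival,
# down-weighting (or dropping) stale contributions.
import numpy as np

class AsyncServer:
    def __init__(self, model: np.ndarray, staleness_tolerance: int = 10):
        self.model = model
        self.version = 0                  # current global model version
        self.staleness_tolerance = staleness_tolerance

    def receive_update(self, update: np.ndarray, client_version: int):
        staleness = self.version - client_version
        if staleness > self.staleness_tolerance:
            return                        # toleration exceeded: drop update
        weight = 1.0 / (1.0 + staleness)  # decay weight with staleness
        self.model += weight * update
        self.version += 1

server = AsyncServer(np.zeros(5))
server.receive_update(np.ones(5), client_version=0)  # fresh: weight 1.0
server.receive_update(np.ones(5), client_version=0)  # staleness 1: weight 0.5
print(server.model)
```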
Commits
🚀 Enhancements & Features
- Add backdoor attack @Alan-Qin (#267)
- Add organizer to FederatedScope @rayrayraykk (#265, #257)
- Monitoring the client-wise and global wandb info @yxdyc (#260, #226, #206, #176, #90)
- More friendly guidance of installation, setup and contribution @rayrayraykk (#255, #192)
- Add learning rate scheduler in FS @DavdGao (#248)
- Support different types of keys when communicating via grpc @xieyxclack (#239)
- Support constructing FL course when server does not have data @xieyxclack (#236)
- Enable the unseen-clients case to check the participation generalization gap @yxdyc (#238, #100)
- Support more robust type conversion in yaml file @yxdyc (#229)
- Asynchronous Federated Learning @xieyxclack (#225)
- Support both pre- and post-merging data for the "global" baseline @yxdyc (#220)
- Format the code by flake8 @rayrayraykk (#211, #207)
- Add paper list of FL-Attacker and FL-Incentive-Mechanism @Osier-Yi (#203, #202, #201)
- Add client samplers @xieyxclack (#200)
- Modify the log for hooks_in_train/test @DavdGao (#181)
- Modification of the finetune mechanism @DavdGao (#177)
- Add FedHPO-B, a benchmark suite for federated hyperparameter optimization @rayrayraykk @joneswong (#173, #146, #127)
- Add pFL-Bench, a comprehensive benchmark for personalized Federated Learning @yxdyc (#169, #149)
- Add B-FHTL, a benchmark suite for studying federated hetero-task learning @DavdGao (#167, #150)
- Update splitter for consistent label distribution @xieyxclack (#154)
- Improve SHA wrapper @joneswong (#145)
- Add slack & DingDing group @xieyxclack (#142)
- Add FedEx @joneswong @rayrayraykk (#141, #137, #120)
- Enable single thread HPO @joneswong (#140)
- Refactor autotune module @joneswong (#133)
- Add paper list of federated database @DavdGao (#129)
- A quadratic objective function-based experiment @joneswong (#111)
- Support optimizers with different parameters @DavdGao (#96)
- Demo how to use SMAC for FedHPO @joneswong (#88)
- FLIT for federated graph classification/regression @wanghh7 (#87)
- Add momentum for the optimizer in server @DavdGao (#86)
- Add an example for distributed mode @xieyxclack (#85)
- Add readme for vFL @xieyxclack (#83)
- Add paper list of FL-NLP @cheneydon (#81)
- Add more models and datasets from external packages. @rayrayraykk (#79, #42)
- Add pFL paper list @yxdyc (#73, #72)
- Add paper list of FedRec @xieyxclack (#68)
- Add paper list of FedHPO @joneswong (#67)
- Add paper list of federated graph learning. @rayrayraykk (#65)
🐛 Bug Fixes
- Fix ditto trainer @yxdyc (#271)
- Fix personalization when module has lazy load hooks @rayrayraykk (#269)
- Fix the wrongly early_stopper.track_and_check calling in client @yxdyc (#237)
- Fix type conversion error and invalid logging in distributed mode @rayrayraykk (#232, #223)
- Fix the cpu and memory wastage problems caused by multiprocess @yxdyc (#212)
- Fix for invalid sample_client_num in some situation @yxdyc (#210)
- Fix the url of GFL dataset @rayrayraykk (#196)
- Fix twitter dataset @rayrayraykk (#187)
- BugFix for monitor and logger @rayrayraykk (#188, #175, #109)
- Fix download url @Osier-Yi @rayrayraykk @xieyxclack (#101, #95, #92, #76)
Release v0.1.0
Release FederatedScope v0.1.0