
Releases: facebookresearch/FLSim

Release v0.1.0

27 Jul 17:39
release v0.1.0

Summary: Major release with new features (servers and channels)

Reviewed By: JohnlNguyen

Differential Revision: D38189339

fbshipit-source-id: f8ca41dab6d380233ba3be23701238f2a002f5c6

FLSim v0.0.2

26 Jul 19:43
Release new FLSim minor version for SecureFLCompression

Summary: We propose to release version 0.0.2, which notably includes quantization primitives, and to add it as a dependency of our new repo [SecureFLCompression](https://github.com/facebookresearch/SecureFLCompression).

Reviewed By: karthikprasad

Differential Revision: D38145513

fbshipit-source-id: 8c66122973dad2cd35cae1c60c3b4a95b44d550d

FLSim v0.0.1

09 Dec 20:32

We are excited to announce the release of FLSim 0.0.1.

Introduction

How does one train a machine learning model without access to user data? Federated Learning (FL) is the technology that answers this question. In a nutshell, FL is a way for many users to collaboratively learn a machine learning model without sharing their data. FL has two main scenarios: cross-silo and cross-device. Cross-silo FL enables collaborative learning among a few large organizations with massive silo datasets, while cross-device FL enables collaborative learning among many small user devices with small local datasets. Cross-device FL, where millions or even billions of users cooperate to learn a model, is a much more complex problem and has attracted less attention from the research community. We designed FLSim to address the cross-device FL use case.

Federated Learning at Scale

Large-scale cross-device Federated Learning (FL) is a federated learning paradigm with several challenges that differentiate it from cross-silo FL: millions of clients coordinating with a central server, and training instability due to the large-cohort problem. With these challenges in mind, we built FLSim to be scalable yet easy to use; FLSim can scale to thousands of clients per round using only one GPU. We hope FLSim will equip researchers to tackle problems in federated learning at scale.

FLSim

Library Structure

FLSim's core components follow the same semantics as FedAvg. At a high level, the server comprises three main components: a selector, an aggregator, and an optimizer. The selector selects clients for training, and the aggregator aggregates client updates until a round is complete. Then, the optimizer updates the server model based on the aggregated gradients. The server communicates with the clients via the channel, which compresses the messages exchanged between the server and the clients. Locally, each client consists of a dataset and a local optimizer. This local optimizer can be SGD, FedProx, or a custom PyTorch optimizer. A minimal sketch of one such round is shown below.
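
To make this structure concrete, here is a minimal, hypothetical sketch of one FedAvg-style round in plain PyTorch, mirroring the components described above (selector, aggregator, server optimizer, channel, and per-client local optimizer). The function name, its arguments, and the assumption that each client dataset is a list of (x, y) batches are illustrative choices for this sketch, not FLSim's actual API.

```python
import copy
import random

import torch
import torch.nn.functional as F


def run_fedavg_round(server_model, client_datasets, users_per_round, server_opt,
                     local_lr=0.01, local_epochs=1):
    """One FedAvg-style round: select -> train locally -> aggregate -> server step."""
    # Selector: sample a cohort of clients for this round.
    cohort = random.sample(client_datasets, users_per_round)

    # Aggregator: accumulate weighted client deltas (server minus client) per parameter.
    deltas = [torch.zeros_like(p) for p in server_model.parameters()]
    total_weight = 0.0

    for dataset in cohort:  # each client holds its own local dataset
        local_model = copy.deepcopy(server_model)
        # Local optimizer: plain SGD here; FedProx or a custom optimizer would slot in instead.
        local_opt = torch.optim.SGD(local_model.parameters(), lr=local_lr)
        for _ in range(local_epochs):
            for x, y in dataset:
                local_opt.zero_grad()
                F.cross_entropy(local_model(x), y).backward()
                local_opt.step()

        # Channel: the update could be compressed/decompressed here; passed through unchanged.
        weight = float(len(dataset))
        for delta, p_server, p_local in zip(deltas, server_model.parameters(),
                                            local_model.parameters()):
            delta += weight * (p_server.detach() - p_local.detach())
        total_weight += weight

    # Server optimizer: treat the averaged delta as a pseudo-gradient and take one step.
    for delta, p in zip(deltas, server_model.parameters()):
        p.grad = delta / total_weight
    server_opt.step()
    server_opt.zero_grad()
```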

Included Datasets

Currently, FLSim supports all datasets from LEAF, including FEMNIST, Shakespeare, Sent140, CelebA, Synthetic, and Reddit. Additionally, we support MNIST and CIFAR-10.

Included Algorithms

FLSim supports standard FedAvg as well as other federated learning methods such as FedAdam, FedProx, FedAvgM, FedBuff, FedLARS, and FedLAMB.
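
The server-side methods above differ mainly in which optimizer consumes the aggregated pseudo-gradient. Continuing the hypothetical sketch from the Library Structure section (again an illustration, not FLSim's actual API), swapping the server optimizer is enough to move between FedAvg, FedAvgM, and FedAdam:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # stand-in for the federated model

# FedAvg: server-side SGD with lr=1.0 applies the averaged client delta directly.
fedavg_opt = torch.optim.SGD(model.parameters(), lr=1.0)

# FedAvgM: the same update rule with server-side momentum.
fedavgm_opt = torch.optim.SGD(model.parameters(), lr=1.0, momentum=0.9)

# FedAdam: an adaptive server optimizer driven by the same aggregated pseudo-gradient.
fedadam_opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Each round would then call the illustrative helper from the earlier sketch, e.g.:
# run_fedavg_round(model, client_datasets, users_per_round=10, server_opt=fedadam_opt)
```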

What’s next?

We hope FLSim will foster large-scale cross-device FL research. We plan to add support for personalization in early 2022, and throughout 2022 we will gather feedback, improve usability, and continue to grow our collection of algorithms, datasets, and models.