add README and example_config (#1)
Co-authored-by: yuexiang.xyx <[email protected]>

Update `tests/run.py` for Jenkins server (alibaba#4)

just a workaround

Feature/synchronize (alibaba#3)

sync with the master branch of our original GitLab

Feature/config refactor (alibaba#5)

refactored configuration-related code

modify README; minor fix (alibaba#6)

Updated README

fix GAN CRA `loss_batch` -> `loss_task` bug

improved the environment set-up guidance

improved the environment set-up guidance

improved the environment set-up guidance

Fix setup requirements.

Update required python version to 3.9.

updated auto-doc component according to the latest changes

[Feature] Add dropout and log training metric. (alibaba#11)

* Add dropout option for CNN and NLP models; add training metrics to logs.

* allow users to determine whether to conduct evaluation on a specific split

* Enable metric in global eval for users to determine whether to conduct evaluation on a specific split.

* fix minor bug when importing nlp loss

* Replace `validate` with `evaluate(target_data_split_name=split)` and remove it to keep the code clean.

made the log file name valid in the Windows environment (alibaba#13)

* made the log file name valid in the Windows environment

update readme (alibaba#15)

* update README

added a demo for black-box optimization (alibaba#14)

- added a demo for black-box optimization
- enabled installation with cuda10

[Bugfix] fixed the invalid logger set-up when `logging` is used before we call `setup_logger` (alibaba#17)

* fixed the invalid logger set-up when `logging` is used before we call `setup_logger`

Change source of `download_url` from our own and fix `README` (alibaba#20)

* Change source of `download_url` from our own `utils.py` and fix `README.md`.

add logo (alibaba#26)

- add logo
- add more icons

modify grpc_comm according to the official tutorial (alibaba#25)

fix path issue

fix wrong logger usage

reformatted

Communication efficiency optimization (alibaba#19)

* minor fixes for distributed mode

* For communication efficiency: dynamic type selection in the gRPC servicer; transformer & parser

Refactored the logger by reducing its redundancy and fixed some minor issues (alibaba#29)

* Reducing the redundancy of the logger

Update test_mf.py

modify the unit test of the MF task

Refactor splitter & transform; modify some data-related configs; add external datasets. (alibaba#33)

[Feature] FedEx (alibaba#37)

[Hotfix] print the missing ``Final`` results (alibaba#41)

* hotfix for the missing ``Final`` results print

Add pre-trained transformers as NLP models.

TODO: @ZHEN, please fix the online aggregator when the device is not specified.

Add an example for transformers.

Fix url. (alibaba#46)

- added the local training baseline
- enabled each client to have its own early-stopper

formatted by linter

formatted by linter

do not use early_stopper in non-local mode

bugfix for the case "sample_client_num = -1"

added global training mode via a proxy client that holds all data

Fix inconsistent device for the PIA test

added local fine-tuning before local evaluation

linter format

bugfix for fedex

update README (alibaba#49)

Feature/attack doc (alibaba#50)

* improved the doc for attack module

added API comments (alibaba#52)

Fix docs about graph. (alibaba#51)

Add API reference for MF task and context (alibaba#53)

* add MF API reference and modify README.md

fix typos

Fix minor bugs

Timeout strategy and minimal received number (alibaba#36)

* For async: timeout strategy and minimal received number

modify api reference (alibaba#56)

update doc of core (alibaba#57)

Add datasets from Hugging Face.

Formatted and fixed minor bugs.

Add datasets and scripts for OpenML.

Modify the example `yaml` of OpenML datasets.

Add materials (paper lists, tutorials) (alibaba#60)

* add FL paper list

Add paper lists (alibaba#61)

* add FL paper list

fixed some missing API references in fs.core (alibaba#54)

As the title says.

update release version (alibaba#64)

Update graph paper list. (alibaba#65)

Add paper list for FedHPO (alibaba#67)

* added paper list for fedhpo

rename and modify some val

Add paper list for FedRec (alibaba#68)

add paper list for FedRec

added pfl paper list (alibaba#72)

added pfl paper list

hotfix for transformers to avoid import error

updated pfl paper list (alibaba#73)

updated pfl paper list

fix url in dblp_new.py (alibaba#76)

update README

update

debug SQuAD model

update

update

update
xieyxclack authored and cheneydon committed Jun 14, 2022
1 parent 8086c34 commit dd5e50f
Showing 389 changed files with 14,918 additions and 3,471 deletions.
Binary file added .DS_Store
2 changes: 2 additions & 0 deletions .gitignore
@@ -133,3 +133,5 @@ dmypy.json

# Pyre type checker
.pyre/

.idea/
44 changes: 25 additions & 19 deletions LICENSE
@@ -335,25 +335,6 @@ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

--------------------------------------------------------------------------------

Code in federatedscope/core/communication.py is adapted from
https://github.com/FedML-AI/FedML

Copyright [FedML] [Chaoyang He, Salman Avestimehr]

Licensed under the Apache License, Version 2.0 (the "License");

you may not use this file except in compliance with the License.
You may obtain a copy of the License at

https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

--------------------------------------------------------------------------------

@@ -406,3 +387,28 @@ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

--------------------------------------------------------------------------------

Code in federatedscope/autotune/fedex/server.py is adapted from
https://github.com/mkhodak/FedEx (MIT License)

Copyright (c) 2021 Mikhail Khodak

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

--------------------------------------------------------------------------------
203 changes: 203 additions & 0 deletions README.md
@@ -0,0 +1,203 @@
<h1 align="center">
<img src="https://img.alicdn.com/imgextra/i1/O1CN01P4ImT91Yj6B2WPQVK_!!6000000003094-2-tps-1207-625.png" width="400" alt="federatedscope-logo">
</h1>

![](https://img.shields.io/badge/language-python-blue.svg)
![](https://img.shields.io/badge/license-Apache-000000.svg)
[![Playground](https://shields.io/badge/JupyterLab-Enjoy%20Your%20FL%20Journey!-F37626?logo=jupyter)](https://try.federatedscope.io/)
[![Contributing](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](https://federatedscope.io/docs/contributor/)

FederatedScope is a comprehensive federated learning platform that provides convenient usage and flexible customization for various federated learning tasks in both academia and industry. Based on an event-driven architecture, FederatedScope integrates rich collections of functionalities to satisfy the burgeoning demands from federated learning, and aims to build up an easy-to-use platform for promoting learning safely and effectively.

A detailed [tutorial](https://federatedscope.io/) is provided on our website.

## Quick Start

We provide an end-to-end example for users to start running a standard FL course with FederatedScope.

### Step 1. Installation

First of all, users need to clone the source code and install the required packages (we suggest Python >= 3.9).

```bash
git clone https://github.com/alibaba/FederatedScope.git
cd FederatedScope
```
You can install the dependencies from the requirements file:
```bash
# For minimal version
conda install --file enviroment/requirements-torch1.10.txt -c pytorch -c conda-forge -c nvidia
# For application version
conda install --file enviroment/requirements-torch1.10-application.txt -c pytorch -c conda-forge -c nvidia -c pyg
```
or build a Docker image and run within a Docker environment:
```bash
docker build -f enviroment/docker_files/federatedscope-torch1.10.Dockerfile -t alibaba/federatedscope:base-env-torch1.10 .
docker run --gpus all --rm -it --name "fedscope" -w $(pwd) alibaba/federatedscope:base-env-torch1.10 /bin/bash
```
Note: if you need to run downstream tasks such as graph FL, replace the requirements/Dockerfile name with its application version when executing the above commands:
```bash
# enviroment/requirements-torch1.10.txt ->
# enviroment/requirements-torch1.10-application.txt
# enviroment/docker_files/federatedscope-torch1.10.Dockerfile ->
# enviroment/docker_files/federatedscope-torch1.10-application.Dockerfile
```
Finally, after all the dependencies are installed, run:
```bash
python setup.py install
```

### Step 2. Prepare datasets

To run an FL task, users should prepare a dataset.
The DataZoo provided in FederatedScope can help to automatically download and preprocess widely-used public datasets for various FL applications, including CV, NLP, graph learning, recommendation, etc. Users can directly specify `cfg.data.type = DATASET_NAME` in the configuration. For example,

```python
cfg.data.type = 'femnist'
```

To use customized datasets, you need to prepare the dataset following a certain format and register it. Please refer to [Customized Datasets](https://federatedscope.io/docs/own-case/#data) for more details.
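
A minimal sketch of such a registration is given below; the loader name `load_my_data`, the per-client dict layout, and the random tensors are illustrative assumptions, not the exact DataZoo format (note that, as in `demo/bbo.py` below, a loader returns both the data and the possibly modified config):

```python
# Hypothetical custom-dataset registration; the dict layout and
# random tensors are placeholders for a real dataset.
import torch
from torch.utils.data import DataLoader, TensorDataset

from federatedscope.register import register_data


def load_my_data(config):
    data = dict()
    for client_id in range(1, config.federate.client_num + 1):
        # Each client holds its own isolated (feature, label) splits.
        dataset = TensorDataset(torch.randn(50, 10),
                                torch.randint(0, 2, (50, )))
        loader = DataLoader(dataset, batch_size=config.data.batch_size)
        data[client_id] = {'train': loader, 'val': loader, 'test': loader}
    return data, config


register_data('mydata', load_my_data)
```

Setting `cfg.data.type = 'mydata'` would then resolve to this loader.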

### Step 3. Prepare models

Then, users should specify the model architecture that will be trained in the FL course.
FederatedScope provides a ModelZoo that contains the implementation of widely adopted model architectures for various FL applications. Users can set up `cfg.model.type = MODEL_NAME` to apply a specific model architecture in FL tasks. For example,

```python
cfg.model.type = 'convnet2'
```

FederatedScope allows users to use customized models via registration. Please refer to [Customized Models](https://federatedscope.io/docs/own-case/#model) for more details about how to customize a model architecture.
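
A rough sketch under the same registration mechanism (the builder name `call_my_net`, its two-argument signature, and the hard-coded dimensions are assumptions):

```python
# Hypothetical model builder; it only responds to its own model type
# and leaves other types to the remaining registered builders.
import torch.nn as nn

from federatedscope.register import register_model


class MyNet(nn.Module):
    def __init__(self, in_dim, out_dim):
        super(MyNet, self).__init__()
        self.fc = nn.Linear(in_dim, out_dim)

    def forward(self, x):
        return self.fc(x)


def call_my_net(model_config, local_data):
    if model_config.type == 'mynet':
        return MyNet(in_dim=10, out_dim=2)


register_model('mynet', call_my_net)
```

With this in place, setting `cfg.model.type = 'mynet'` would select the registered builder.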

### Step 4. Start running an FL task

Note that FederatedScope provides a unified interface for both standalone mode and distributed mode, and allows users to switch between them via configuration.

#### Standalone mode

The standalone mode in FederatedScope simulates multiple participants (servers and clients) on a single device, while participants' data are isolated from each other and their models may be shared via message passing.

Here we demonstrate how to run a standard FL task with FederatedScope, setting `cfg.data.type = 'FEMNIST'` and `cfg.model.type = 'ConvNet2'` to run vanilla FedAvg on an image classification task. Users can customize training configurations, such as `cfg.federate.total_round_num`, `cfg.data.batch_size`, and `cfg.optimizer.lr`, in the configuration (a .yaml file), and run a standard FL task as:

```bash
# Run with default configurations
python federatedscope/main.py --cfg federatedscope/example_configs/femnist.yaml
# Or with custom configurations
python federatedscope/main.py --cfg federatedscope/example_configs/femnist.yaml federate.total_round_num 50 data.batch_size 128
```
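
For reference, here is a hedged sketch of the kinds of fields such a `.yaml` file may carry; the authoritative schema is the shipped `femnist.yaml` and the config definitions in the code, and the values below are placeholders:

```yaml
# Hypothetical values; see federatedscope/example_configs/femnist.yaml
# for the actual shipped configuration.
use_gpu: True
federate:
  mode: standalone
  total_round_num: 50
data:
  type: femnist
  batch_size: 128
model:
  type: convnet2
optimizer:
  lr: 0.01
```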

Then you can observe some monitored metrics during the training process as:

```
INFO: Server #0 has been set up ...
INFO: Model meta-info: <class 'federatedscope.cv.model.cnn.ConvNet2'>.
... ...
INFO: Client has been set up ...
INFO: Model meta-info: <class 'federatedscope.cv.model.cnn.ConvNet2'>.
... ...
INFO: {'Role': 'Client #5', 'Round': 0, 'Results_raw': {'train_loss': 207.6341676712036, 'train_acc': 0.02, 'train_total': 50, 'train_loss_regular': 0.0, 'train_avg_loss': 4.152683353424072}}
INFO: {'Role': 'Client #1', 'Round': 0, 'Results_raw': {'train_loss': 209.0940284729004, 'train_acc': 0.02, 'train_total': 50, 'train_loss_regular': 0.0, 'train_avg_loss': 4.1818805694580075}}
INFO: {'Role': 'Client #8', 'Round': 0, 'Results_raw': {'train_loss': 202.24929332733154, 'train_acc': 0.04, 'train_total': 50, 'train_loss_regular': 0.0, 'train_avg_loss': 4.0449858665466305}}
INFO: {'Role': 'Client #6', 'Round': 0, 'Results_raw': {'train_loss': 209.43883895874023, 'train_acc': 0.06, 'train_total': 50, 'train_loss_regular': 0.0, 'train_avg_loss': 4.1887767791748045}}
INFO: {'Role': 'Client #9', 'Round': 0, 'Results_raw': {'train_loss': 208.83140087127686, 'train_acc': 0.0, 'train_total': 50, 'train_loss_regular': 0.0, 'train_avg_loss': 4.1766280174255375}}
INFO: ----------- Starting a new training round (Round #1) -------------
... ...
INFO: Server #0: Training is finished! Starting evaluation.
INFO: Client #1: (Evaluation (test set) at Round #20) test_loss is 163.029045
... ...
INFO: Server #0: Final evaluation is finished! Starting merging results.
... ...
```

#### Distributed mode

The distributed mode in FederatedScope denotes running multiple processes to build up an FL course, where each process acts as a participant (server or client) that instantiates its model and loads its data. The communication between participants is provided by the communication module of FederatedScope.

To run with distributed mode, you only need to:

- Prepare an isolated data file and set `cfg.distribute.data_file = PATH/TO/DATA` for each participant;
- Change `cfg.federate.mode = 'distributed'`, and specify the role of each participant by `cfg.distribute.role = 'server'/'client'`;
- Set up a valid address by `cfg.distribute.host = x.x.x.x` and `cfg.distribute.port = xxxx`. (Note that for a server, you need to set up server_host/server_port for listening to messages, while for a client, you need to set up client_host/client_port for listening and server_host/server_port for sending join-in applications when building up an FL course.) A minimal server-side sketch follows this list.
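
Putting these options together, a minimal server-side configuration might look like the sketch below; the field names mirror the list above, while `distribute.use` and the concrete port value are assumptions:

```yaml
# Hypothetical server-side distributed config.
federate:
  mode: distributed
distribute:
  use: True
  role: server
  data_file: PATH/TO/DATA
  server_host: x.x.x.x
  server_port: 50051
```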

We prepare a synthetic example for running with distributed mode:

```bash
# For server
python main.py --cfg federatedscope/example_configs/distributed_server.yaml data_path 'PATH/TO/DATA' distribute.server_host x.x.x.x distribute.server_port xxxx

# For clients
python main.py --cfg federatedscope/example_configs/distributed_client_1.yaml data_path 'PATH/TO/DATA' distribute.server_host x.x.x.x distribute.server_port xxxx distribute.client_host x.x.x.x distribute.client_port xxxx
python main.py --cfg federatedscope/example_configs/distributed_client_2.yaml data_path 'PATH/TO/DATA' distribute.server_host x.x.x.x distribute.server_port xxxx distribute.client_host x.x.x.x distribute.client_port xxxx
```

And you can observe the results as follows (the IP addresses are anonymized as 'x.x.x.x'):

```
INFO: Server #0: Listen to x.x.x.x:xxxx...
INFO: Server #0 has been set up ...
Model meta-info: <class 'federatedscope.core.lr.LogisticRegression'>.
... ...
INFO: Client: Listen to x.x.x.x:xxxx...
INFO: Client (address x.x.x.x:xxxx) has been set up ...
Client (address x.x.x.x:xxxx) is assigned with #1.
INFO: Model meta-info: <class 'federatedscope.core.lr.LogisticRegression'>.
... ...
{'Role': 'Client #2', 'Round': 0, 'Results_raw': {'train_avg_loss': 5.215108394622803, 'train_loss': 333.7669372558594, 'train_total': 64}}
{'Role': 'Client #1', 'Round': 0, 'Results_raw': {'train_total': 64, 'train_loss': 290.9668884277344, 'train_avg_loss': 4.54635763168335}}
----------- Starting a new training round (Round #1) -------------
... ...
INFO: Server #0: Training is finished! Starting evaluation.
INFO: Client #1: (Evaluation (test set) at Round #20) test_loss is 30.387419
... ...
INFO: Server #0: Final evaluation is finished! Starting merging results.
... ...
```


## Advanced

As a comprehensive FL platform, FederatedScope provides fundamental implementations to support the requirements of various FL applications and frontier studies, towards both convenient usage and flexible extension, including:

- **Personalized Federated Learning**: Client-specific model architectures and training configurations are applied to handle the non-IID issues caused by the diverse data distributions and heterogeneous system resources.
- **Federated Hyperparameter Optimization**: When hyperparameter optimization (HPO) comes to Federated Learning, each attempt is extremely costly due to the multiple rounds of communication across participants. It is worth noting that HPO in the FL setting is unique, and more techniques, such as low-fidelity HPO, should be promoted.
- **Privacy Attacker**: Privacy attack algorithms are important and convenient tools to verify the privacy protection strength of the designed FL systems and algorithms; this area is growing along with Federated Learning.
- **Graph Federated Learning**: Working on ubiquitous graph data, Graph Federated Learning aims to exploit isolated sub-graph data to learn a global model, and has attracted increasing attention.
- **Recommendation**: As a number of laws and regulations go into effect all over the world, more and more people are aware of the importance of privacy protection, which urges recommender systems to learn from user data in a privacy-preserving manner.
- **Differential Privacy**: Different from encryption algorithms that require a large amount of computational resources, differential privacy is an economical yet flexible technique to protect privacy, which has achieved great success in databases and is ever-growing in federated learning.
- ...

More features are coming soon! We have prepared a [tutorial](https://federatedscope.io/) to provide more details about how to utilize FederatedScope and enjoy your journey of Federated Learning!

## Documentation

The classes and methods of FederatedScope have been well documented so that users can generate the API references by:

```shell
pip install -r requirements-doc.txt
make html
```

We put the API references on our [website](https://federatedscope.io/refs/index).

## License

FederatedScope is released under Apache License 2.0.

## Publications
If you find FederatedScope useful for your research or development, please cite the following <a href="https://arxiv.org/abs/2204.05011" target="_blank">paper</a>:
```
@article{federatedscope,
  title = {FederatedScope: A Flexible Federated Learning Platform for Heterogeneity},
  author = {Xie, Yuexiang and Wang, Zhen and Chen, Daoyuan and Gao, Dawei and Yao, Liuyi and Kuang, Weirui and Li, Yaliang and Ding, Bolin and Zhou, Jingren},
  journal = {arXiv preprint arXiv:2204.05011},
  year = {2022},
}
```
More publications can be found in the [Publications](https://federatedscope.io/year-archive/).

## Contributing

We **greatly appreciate** any contribution to FederatedScope! You can refer to [Contributing to FederatedScope](https://federatedscope.io/docs/contributor/) for more details.

110 changes: 110 additions & 0 deletions demo/bbo.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,110 @@
"""This python script is provided to demonstrate the interaction between emukit and FederatedScope.
Specifically, we apply Black-Box Optimization (BBO) to search the optimal hyperparameters of the considered federated learning algorithms.
emukit can be installed by `pip install emukit`
"""
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import colors as mcolors

from emukit.test_functions import forrester_function
from emukit.core import ContinuousParameter, CategoricalParameter, ParameterSpace
from emukit.examples.gp_bayesian_optimization.single_objective_bayesian_optimization import GPBayesianOptimization

### --- Figure config
LEGEND_SIZE = 15


def eval_fl_algo(x):
from federatedscope.core.cmd_args import parse_args
from federatedscope.core.auxiliaries.data_builder import get_data
from federatedscope.core.auxiliaries.utils import setup_seed, update_logger
from federatedscope.core.auxiliaries.worker_builder import get_client_cls, get_server_cls
from federatedscope.core.configs.config import global_cfg
from federatedscope.core.fed_runner import FedRunner

init_cfg = global_cfg.clone()
init_cfg.merge_from_file(
"federatedscope/example_configs/single_process.yaml")
init_cfg.merge_from_list(["optimizer.lr", float(x[0])])

update_logger(init_cfg, True)
setup_seed(init_cfg.seed)

# federated dataset might change the number of clients
# thus, we allow the creation procedure of dataset to modify the global cfg object
data, modified_cfg = get_data(config=init_cfg.clone())
init_cfg.merge_from_other_cfg(modified_cfg)

init_cfg.freeze()

runner = FedRunner(data=data,
server_class=get_server_cls(init_cfg),
client_class=get_client_cls(init_cfg),
config=init_cfg.clone())
results = runner.run()

# so that we could modify cfg in the next trial
init_cfg.defrost()

return [results['client_summarized_weighted_avg']['test_avg_loss']]


def our_target_func(x):
return np.asarray([eval_fl_algo(elem) for elem in x])


def main():
#target_function, space = forrester_function()
target_function = our_target_func
space = ParameterSpace([ContinuousParameter('lr', 1e-4, .75)])
x_plot = np.linspace(space.parameters[0].min, space.parameters[0].max,
200)[:, None]
#y_plot = target_function(x_plot)
X_init = np.array([[0.005], [0.05], [0.5]])
Y_init = target_function(X_init)

bo = GPBayesianOptimization(variables_list=space.parameters,
X=X_init,
Y=Y_init)
bo.run_optimization(target_function, 15)

mu_plot, var_plot = bo.model.predict(x_plot)

plt.figure(figsize=(12, 8))
plt.plot(bo.loop_state.X,
bo.loop_state.Y,
"ro",
markersize=10,
label="Observations")
#plt.plot(x_plot, y_plot, "k", label="Objective Function")
#plt.plot(x_plot, mu_plot, "C0", label="Model")
plt.fill_between(x_plot[:, 0],
mu_plot[:, 0] + np.sqrt(var_plot)[:, 0],
mu_plot[:, 0] - np.sqrt(var_plot)[:, 0],
color="C0",
alpha=0.6)

plt.fill_between(x_plot[:, 0],
mu_plot[:, 0] + 2 * np.sqrt(var_plot)[:, 0],
mu_plot[:, 0] - 2 * np.sqrt(var_plot)[:, 0],
color="C0",
alpha=0.4)

plt.fill_between(x_plot[:, 0],
mu_plot[:, 0] + 3 * np.sqrt(var_plot)[:, 0],
mu_plot[:, 0] - 3 * np.sqrt(var_plot)[:, 0],
color="C0",
alpha=0.2)
plt.legend(loc=2, prop={'size': LEGEND_SIZE})
plt.xlabel(r"$x$")
plt.ylabel(r"$f(x)$")
plt.grid(True)
plt.xlim(0, 0.75)

#plt.show()
plt.savefig("bbo.pdf", bbox_inches='tight')
plt.close()


if __name__ == "__main__":
main()
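
To try the demo, one would install emukit and launch the script from the repository root so that the relative config path above resolves (a usage sketch, not a tested recipe):

```bash
# Hypothetical usage; run from the FederatedScope repository root so that
# federatedscope/example_configs/single_process.yaml resolves.
pip install emukit
python demo/bbo.py
```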
