Fix name for hpobench #474

Merged · 3 commits · Dec 16, 2022
2 changes: 1 addition & 1 deletion README.md
@@ -21,7 +21,7 @@ You can try FederatedScope via [FederatedScope Playground](https://try.federated
- ![new](https://img.alicdn.com/imgextra/i4/O1CN01kUiDtl1HVxN6G56vN_!!6000000000764-2-tps-43-19.png) [08-18-2022] Our KDD 2022 [paper](https://arxiv.org/abs/2204.05562) on federated graph learning receives the KDD Best Paper Award for ADS track!
- ![new](https://img.alicdn.com/imgextra/i4/O1CN01kUiDtl1HVxN6G56vN_!!6000000000764-2-tps-43-19.png) [07-30-2022] We release FederatedScope v0.2.0!
- [06-17-2022] We release **pFL-Bench**, a comprehensive benchmark for personalized Federated Learning (pFL), containing 10+ datasets and 20+ baselines. [[code](https://github.com/alibaba/FederatedScope/tree/master/benchmark/pFL-Bench), [pdf](https://arxiv.org/abs/2206.03655)]
- [06-17-2022] We release **FedHPO-B**, a benchmark suite for studying federated hyperparameter optimization. [[code](https://github.com/alibaba/FederatedScope/tree/master/benchmark/FedHPOB), [pdf](https://arxiv.org/abs/2206.03966)]
- [06-17-2022] We release **FedHPO-Bench**, a benchmark suite for studying federated hyperparameter optimization. [[code](https://github.com/alibaba/FederatedScope/tree/master/benchmark/FedHPOBench), [pdf](https://arxiv.org/abs/2206.03966)]
- [06-17-2022] We release **B-FHTL**, a benchmark suit for studying federated hetero-task learning. [[code](https://github.com/alibaba/FederatedScope/tree/master/benchmark/B-FHTL), [pdf](https://arxiv.org/abs/2206.03436)]
- [06-13-2022] Our project was receiving an attack, which has been resolved. [More details](https://github.com/alibaba/FederatedScope/blob/master/doc/news/06-13-2022_Declaration_of_Emergency.txt).
- [05-25-2022] Our paper [FederatedScope-GNN](https://arxiv.org/abs/2204.05562) has been accepted by KDD'2022!
24 changes: 12 additions & 12 deletions benchmark/FedHPOBench/README.md
@@ -1,14 +1,14 @@
# FedHPO-B
# FedHPO-Bench

A benchmark suite for studying federated hyperparameter optimization. FedHPO-B incorporates comprehensive FL tasks, enables efficient function evaluations, and eases continuing extensions. We also conduct extensive experiments based on FedHPO-B to benchmark a few HPO methods.
A benchmark suite for studying federated hyperparameter optimization. FedHPO-Bench incorporates comprehensive FL tasks, enables efficient function evaluations, and eases continuing extensions. We also conduct extensive experiments based on FedHPO-B to benchmark a few HPO methods.

## Quick Start

We highly recommend running FedHPO-B with conda.
We highly recommend running FedHPO-Bench with conda.

### Step 0. Dependency

* FedHPO-B is built on a stable [FederatedScope](https://github.com/alibaba/FederatedScope), please see [Installation](https://github.com/alibaba/FederatedScope#step-1-installation) for install FederatedScope.
* FedHPO-Bench is built on a stable [FederatedScope](https://github.com/alibaba/FederatedScope), please see [Installation](https://github.com/alibaba/FederatedScope#step-1-installation) for install FederatedScope.

```bash
git clone https://github.com/alibaba/FederatedScope.git
@@ -33,7 +33,7 @@ We highly recommend running FedHPO-B with conda.

### Step 1. Installation

We recommend installing FedHPOB directly using git by:
We recommend installing FedHPOBench directly using git by:

```bash
git clone https://github.com/alibaba/FederatedScope.git
@@ -43,9 +43,9 @@ export PYTHONPATH=~/FedHPOBench:$PYTHONPATH

### Step 2. Prepare data files

**Note**: If you only want to use FedHPO-B with raw mode, you can skip to **Step3**.
**Note**: If you only want to use FedHPO-Bench with raw mode, you can skip to **Step3**.

All data files are available on AliyunOSS, you need to download the data files and place them in the `~/data/tabular_data/` or `~/data/surrogate_model/` before using FedHPO-B.
All data files are available on AliyunOSS, you need to download the data files and place them in the `~/data/tabular_data/` or `~/data/surrogate_model/` before using FedHPO-Bench.

The naming pattern of the url of data files obeys the rule:

@@ -62,8 +62,8 @@ Fortunately, we provide tools to automatically convert from tabular data to surr
### Step3. Start running

```python
from fedhpob.config import fhb_cfg
from fedhpob.benchmarks import TabularBenchmark
from fedhpobench.config import fhb_cfg
from fedhpobench.benchmarks import TabularBenchmark

benchmark = TabularBenchmark('cnn', 'femnist', 'avg')

@@ -99,7 +99,7 @@ We take Figure 11 as an example.
* Then draw the figure with tools we provide, the figures will be saved in `~/figures`.

```python
from fedhpob.utils.draw import rank_over_time
from fedhpobench.utils.draw import rank_over_time

rank_over_time('exp_results', 'gcn', algo='avg', loss=False)
```
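Both snippets above answer queries from precomputed results rather than launching federated training runs. As a self-contained illustration of the idea behind tabular mode (hypothetical names and values, not the real fedhpobench API):

```python
# Toy sketch of a "tabular" benchmark: every configuration was evaluated
# offline, so an HPO query is a dictionary lookup instead of an FL run.
# The keys, values, and `query` helper are illustrative only.
table = {
    # (learning_rate, local_update_steps) -> test accuracy
    (0.01, 1): 0.71,
    (0.01, 4): 0.78,
    (0.10, 1): 0.65,
    (0.10, 4): 0.69,
}

def query(config):
    """Return the precomputed result for a configuration in O(1)."""
    return table[(config["lr"], config["steps"])]

best = max(table, key=table.get)
print(best, table[best])  # (0.01, 4) 0.78
```

In the real suite, `TabularBenchmark` plays this role against the downloaded data files, with fidelities (e.g. the round number) presumably also indexing the table.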
@@ -130,11 +130,11 @@ How to use:

## Publications

If you find FedHPO-B useful for your research or development, please cite the following [paper](https://arxiv.org/abs/2206.03966):
If you find FedHPO-Bench useful for your research or development, please cite the following [paper](https://arxiv.org/abs/2206.03966):

```tex
@article{Wang2022FedHPOBAB,
title={FedHPO-B: A Benchmark Suite for Federated Hyperparameter Optimization},
title={FedHPO-Bench: A Benchmark Suite for Federated Hyperparameter Optimization},
author={Zhen Wang and Weirui Kuang and Ce Zhang and Bolin Ding and Yaliang Li},
journal={ArXiv},
year={2022},
2 changes: 1 addition & 1 deletion benchmark/FedHPOBench/demo/femnist_surrogate_benchmark.py
@@ -112,7 +112,7 @@ def get_meta_information() -> Dict:
'journal = {arXiv preprint arXiv:2206.03966},'
'year = {2022}}', 'https://arxiv.org/pdf/2206.03966v4.pdf',
'https://github.com/alibaba/FederatedScope/tree/master'
'/benchmark/FedHPOB'
'/benchmark/FedHPOBench'
],
'code': 'https://github.com/alibaba/FederatedScope/tree/master'
'/benchmark/FedHPOBench',
4 changes: 2 additions & 2 deletions benchmark/FedHPOBench/demo/femnist_tabular_benchmark.py
@@ -24,7 +24,7 @@ def __init__(self,
Optimization",
url: https://arxiv.org/pdf/2206.03966v4.pdf
Source: https://github.com/alibaba/FederatedScope/tree/master
/benchmark/FedHPOB
/benchmark/FedHPOBench
Parameters
----------
data_path : str, Path
@@ -128,7 +128,7 @@ def get_meta_information() -> Dict:
'journal = {arXiv preprint arXiv:2206.03966},'
'year = {2022}}', 'https://arxiv.org/pdf/2206.03966v4.pdf',
'https://github.com/alibaba/FederatedScope/tree/master'
'/benchmark/FedHPOB'
'/benchmark/FedHPOBench'
],
'code': 'https://github.com/alibaba/FederatedScope/tree/master'
'/benchmark/FedHPOBench',
6 changes: 3 additions & 3 deletions benchmark/FedHPOBench/fedhpobench/benchmarks/__init__.py
@@ -1,5 +1,5 @@
from fedhpob.benchmarks.raw_benchmark import RawBenchmark
from fedhpob.benchmarks.tabular_benchmark import TabularBenchmark
from fedhpob.benchmarks.surrogate_benchmark import SurrogateBenchmark
from fedhpobench.benchmarks.raw_benchmark import RawBenchmark
from fedhpobench.benchmarks.tabular_benchmark import TabularBenchmark
from fedhpobench.benchmarks.surrogate_benchmark import SurrogateBenchmark

__all__ = ['RawBenchmark', 'TabularBenchmark', 'SurrogateBenchmark']
10 changes: 5 additions & 5 deletions benchmark/FedHPOBench/fedhpobench/benchmarks/base_benchmark.py
@@ -5,9 +5,9 @@
import numpy as np
from federatedscope.core.configs.config import global_cfg
from federatedscope.core.auxiliaries.data_builder import get_data
from fedhpob.utils.tabular_dataloader import load_data
from fedhpob.utils.util import disable_fs_logger
from fedhpob.utils.cost_model import get_cost_model
from fedhpobench.utils.tabular_dataloader import load_data
from fedhpobench.utils.util import disable_fs_logger
from fedhpobench.utils.cost_model import get_cost_model


class BaseBenchmark(abc.ABC):
@@ -79,14 +79,14 @@ def get_lamba_from_df(self, configuration, fidelity):
filterd_result['eval_time'])
return c.total_seconds() / float(client_num)
else:
from fedhpob.config import fhb_cfg
from fedhpobench.config import fhb_cfg
return fhb_cfg.cost.c

def _cost(self, configuration, fidelity, **kwargs):
try:
kwargs['const'] = self.get_lamba_from_df(configuration, fidelity)
except:
from fedhpob.config import fhb_cfg
from fedhpobench.config import fhb_cfg
kwargs['const'] = fhb_cfg.cost.c
cost_model = get_cost_model(mode=self.cost_mode)
t = cost_model(self.cfg, configuration, fidelity, self.data, **kwargs)
6 changes: 3 additions & 3 deletions benchmark/FedHPOBench/fedhpobench/benchmarks/raw_benchmark.py
@@ -5,9 +5,9 @@
get_server_cls
from federatedscope.core.fed_runner import FedRunner

from fedhpob.benchmarks.base_benchmark import BaseBenchmark
from fedhpob.utils.util import disable_fs_logger
from fedhpob.utils.cost_model import merge_cfg
from fedhpobench.benchmarks.base_benchmark import BaseBenchmark
from fedhpobench.utils.util import disable_fs_logger
from fedhpobench.utils.cost_model import merge_cfg


class RawBenchmark(BaseBenchmark):
@@ -1,7 +1,7 @@
import os

from fedhpob.benchmarks.base_benchmark import BaseBenchmark
from fedhpob.utils.surrogate_dataloader import build_surrogate_model, \
from fedhpobench.benchmarks.base_benchmark import BaseBenchmark
from fedhpobench.utils.surrogate_dataloader import build_surrogate_model, \
load_surrogate_model


@@ -2,9 +2,9 @@
import logging
import numpy as np

from fedhpob.utils.util import dict2cfg
from fedhpob.utils.tabular_dataloader import load_data
from fedhpob.benchmarks.base_benchmark import BaseBenchmark
from fedhpobench.utils.util import dict2cfg
from fedhpobench.utils.tabular_dataloader import load_data
from fedhpobench.benchmarks.base_benchmark import BaseBenchmark


class TabularBenchmark(BaseBenchmark):
6 changes: 3 additions & 3 deletions benchmark/FedHPOBench/fedhpobench/config.py
@@ -1,8 +1,8 @@
import ConfigSpace as CS
from federatedscope.core.configs.config import CN
from fedhpob.benchmarks import TabularBenchmark
from fedhpob.benchmarks import RawBenchmark
from fedhpob.benchmarks import SurrogateBenchmark
from fedhpobench.benchmarks import TabularBenchmark
from fedhpobench.benchmarks import RawBenchmark
from fedhpobench.benchmarks import SurrogateBenchmark

fhb_cfg = CN()

10 changes: 5 additions & 5 deletions benchmark/FedHPOBench/fedhpobench/optimizers/__init__.py
@@ -1,8 +1,8 @@
from fedhpob.optimizers.dehb_optimizer import run_dehb
from fedhpob.optimizers.hpbandster_optimizer import run_hpbandster
from fedhpob.optimizers.optuna_optimizer import run_optuna
from fedhpob.optimizers.smac_optimizer import run_smac
from fedhpob.optimizers.grid_search import run_grid_search
from fedhpobench.optimizers.dehb_optimizer import run_dehb
from fedhpobench.optimizers.hpbandster_optimizer import run_hpbandster
from fedhpobench.optimizers.optuna_optimizer import run_optuna
from fedhpobench.optimizers.smac_optimizer import run_smac
from fedhpobench.optimizers.grid_search import run_grid_search

__all__ = [
'run_dehb', 'run_hpbandster', 'run_optuna', 'run_smac', 'run_grid_search'
@@ -27,8 +27,8 @@
import random
import logging
from dehb.optimizers import DE, DEHB
from fedhpob.config import fhb_cfg
from fedhpob.utils.monitor import Monitor
from fedhpobench.config import fhb_cfg
from fedhpobench.utils.monitor import Monitor

logging.basicConfig(level=logging.WARNING)

4 changes: 2 additions & 2 deletions benchmark/FedHPOBench/fedhpobench/optimizers/grid_search.py
@@ -6,8 +6,8 @@
import ConfigSpace as CS
from ConfigSpace.util import generate_grid

from fedhpob.config import fhb_cfg
from fedhpob.utils.monitor import Monitor
from fedhpobench.config import fhb_cfg
from fedhpobench.utils.monitor import Monitor

logging.basicConfig(level=logging.WARNING)

@@ -7,8 +7,8 @@
from hpbandster.core.worker import Worker
from hpbandster.optimizers import BOHB, HyperBand, RandomSearch

from fedhpob.config import fhb_cfg
from fedhpob.utils.monitor import Monitor
from fedhpobench.config import fhb_cfg
from fedhpobench.utils.monitor import Monitor

logging.basicConfig(level=logging.WARNING)

@@ -13,8 +13,8 @@
from optuna.samplers import TPESampler
from optuna.trial import Trial

from fedhpob.config import fhb_cfg
from fedhpob.utils.monitor import Monitor
from fedhpobench.config import fhb_cfg
from fedhpobench.utils.monitor import Monitor

logging.basicConfig(level=logging.WARNING)

@@ -7,8 +7,8 @@
from smac.facade.smac_hpo_facade import SMAC4HPO
from smac.scenario.scenario import Scenario

from fedhpob.config import fhb_cfg
from fedhpob.utils.monitor import Monitor
from fedhpobench.config import fhb_cfg
from fedhpobench.utils.monitor import Monitor

logging.basicConfig(level=logging.WARNING)

6 changes: 3 additions & 3 deletions benchmark/FedHPOBench/fedhpobench/utils/draw.py
@@ -24,7 +24,7 @@ def logloader(file):


def ecdf(model, data_list, algo, sample_client=None, key='test_acc'):
from fedhpob.benchmarks import TabularBenchmark
from fedhpobench.benchmarks import TabularBenchmark

# Draw ECDF from target data_list
plt.figure(figsize=(10, 7.5))
@@ -294,8 +294,8 @@ def landscape(model='cnn',
sample_client=None,
key='test_acc'):
import plotly.graph_objects as go
from fedhpob.config import fhb_cfg
from fedhpob.benchmarks import TabularBenchmark
from fedhpobench.config import fhb_cfg
from fedhpobench.benchmarks import TabularBenchmark

z = []
benchmark = TabularBenchmark(model, dname, algo, device=-1)
2 changes: 1 addition & 1 deletion benchmark/FedHPOBench/fedhpobench/utils/monitor.py
@@ -5,7 +5,7 @@

import numpy as np

from fedhpob.utils.util import cfg2name
from fedhpobench.utils.util import cfg2name

logging.basicConfig(level=logging.WARNING)

4 changes: 2 additions & 2 deletions benchmark/FedHPOBench/fedhpobench/utils/runner.py
@@ -1,6 +1,6 @@
from federatedscope.core.cmd_args import parse_args
from fedhpob.config import fhb_cfg, add_configs
from fedhpob.optimizers import run_dehb, run_hpbandster, run_optuna, \
from fedhpobench.config import fhb_cfg, add_configs
from fedhpobench.optimizers import run_dehb, run_hpbandster, run_optuna, \
run_smac, run_grid_search


@@ -7,7 +7,7 @@
from sklearn.model_selection import cross_validate as sk_cross_validate
from tqdm import tqdm

from fedhpob.utils.tabular_dataloader import load_data
from fedhpobench.utils.tabular_dataloader import load_data


def sampling(X, Y, over_rate=1, down_rate=1.0, cvg_score=0.5):
@@ -31,7 +31,7 @@ do
do
for k in {1..3}
do
python federatedscope/main.py --cfg benchmark/FedHPOB/scripts/lr/twitter.yaml device $cudaid train.optimizer.lr $lr train.optimizer.weight_decay ${wds[$w]} train.local_update_steps ${steps[$s]} data.batch_size ${batch_sizes[$b]} federate.sample_client_rate ${sample_rates[$sr]} seed $k outdir lr/${out_dir}_${sample_rates[$sr]} expname lr${lr}_wd${wds[$w]}_dropout0_step${steps[$s]}_batch${batch_sizes[$b]}_seed${k}
python federatedscope/main.py --cfg benchmark/FedHPOBench/scripts/lr/twitter.yaml device $cudaid train.optimizer.lr $lr train.optimizer.weight_decay ${wds[$w]} train.local_update_steps ${steps[$s]} data.batch_size ${batch_sizes[$b]} federate.sample_client_rate ${sample_rates[$sr]} seed $k outdir lr/${out_dir}_${sample_rates[$sr]} expname lr${lr}_wd${wds[$w]}_dropout0_step${steps[$s]}_batch${batch_sizes[$b]}_seed${k}
done
done
done
2 changes: 1 addition & 1 deletion benchmark/FedHPOBench/scripts/exp/run_mode.sh
@@ -7,7 +7,7 @@ device=$4
algo=$5

cd ../..
cp fedhpob/utils/runner.py . || echo "File exists."
cp fedhpobench/utils/runner.py . || echo "File exists."

for k in {1..5}; do
python runner.py --cfg scripts/exp/${dataset}.yaml benchmark.device ${device} benchmark.model ${model} benchmark.type ${mode} benchmark.data ${dataset} benchmark.algo ${algo} optimizer.type rs || echo "continue"
2 changes: 1 addition & 1 deletion benchmark/FedHPOBench/scripts/gcn/run_hpo_cora_dp.sh
@@ -24,7 +24,7 @@ for ((l = 0; l < ${#lrs[@]}; l++)); do
for ((d = 0; d < ${#dps[@]}; d++)); do
for ((s = 0; s < ${#steps[@]}; s++)); do
for k in {1..3}; do
python federatedscope/main.py --cfg benchmark/FedHPOB/scripts/gcn/cora_dp.yaml device $cudaid train.optimizer.lr ${lrs[$l]} train.optimizer.weight_decay ${wds[$w]} model.dropout ${dps[$d]} train.local_update_steps ${steps[$s]} federate.sample_client_num $sample_num seed $k outdir ${out_dir}/${sample_num} expname lr${lrs[$l]}_wd${wds[$w]}_dropout${dps[$d]}_step${steps[$s]}_seed${k} >/dev/null 2>&1
python federatedscope/main.py --cfg benchmark/FedHPOBench/scripts/gcn/cora_dp.yaml device $cudaid train.optimizer.lr ${lrs[$l]} train.optimizer.weight_decay ${wds[$w]} model.dropout ${dps[$d]} train.local_update_steps ${steps[$s]} federate.sample_client_num $sample_num seed $k outdir ${out_dir}/${sample_num} expname lr${lrs[$l]}_wd${wds[$w]}_dropout${dps[$d]}_step${steps[$s]}_seed${k} >/dev/null 2>&1
done
done
done
2 changes: 1 addition & 1 deletion benchmark/FedHPOBench/scripts/gcn/run_prox_cora.sh
@@ -24,7 +24,7 @@ for ((l = 0; l < ${#lrs[@]}; l++)); do
for ((d = 0; d < ${#dps[@]}; d++)); do
for ((s = 0; s < ${#steps[@]}; s++)); do
for k in {1..3}; do
python federatedscope/main.py --cfg benchmark/FedHPOB/scripts/gcn/cora_prox.yaml device $cudaid train.optimizer.lr ${lrs[$l]} fedprox.use True fedprox.mu ${mu} train.optimizer.weight_decay ${wds[$w]} model.dropout ${dps[$d]} train.local_update_steps ${steps[$s]} federate.sample_client_num $sample_num seed $k outdir ${out_dir}/${sample_num} expname lr${lrs[$l]}_wd${wds[$w]}_dropout${dps[$d]}_step${steps[$s]}_mu${mu}_seed${k} >/dev/null 2>&1
python federatedscope/main.py --cfg benchmark/FedHPOBench/scripts/gcn/cora_prox.yaml device $cudaid train.optimizer.lr ${lrs[$l]} fedprox.use True fedprox.mu ${mu} train.optimizer.weight_decay ${wds[$w]} model.dropout ${dps[$d]} train.local_update_steps ${steps[$s]} federate.sample_client_num $sample_num seed $k outdir ${out_dir}/${sample_num} expname lr${lrs[$l]}_wd${wds[$w]}_dropout${dps[$d]}_step${steps[$s]}_mu${mu}_seed${k} >/dev/null 2>&1
done
done
done
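Because this PR renames the importable package from `fedhpob` to `fedhpobench`, any downstream code still using the old imports breaks after upgrading. A minimal migration sketch (the `migrate` helper and `demo.py` file are illustrative, not part of the repo):

```python
import re
from pathlib import Path

OLD, NEW = "fedhpob", "fedhpobench"

def migrate(path):
    # \b word boundaries leave already-renamed "fedhpobench" untouched,
    # since a word character follows the "fedhpob" prefix inside it.
    p = Path(path)
    p.write_text(re.sub(rf"\b{OLD}\b", NEW, p.read_text()))

# Demo on a throwaway file mixing old and new imports.
demo = Path("demo.py")
demo.write_text("from fedhpob.benchmarks import TabularBenchmark\n"
                "from fedhpobench.config import fhb_cfg\n")
migrate(demo)
print(demo.read_text())
# from fedhpobench.benchmarks import TabularBenchmark
# from fedhpobench.config import fhb_cfg
```

Running it twice is safe: the substitution is a no-op once every import already uses the new name.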