Hello, and thank you for open-sourcing your code.
I am new to federated learning and have recently been trying to reproduce your work.
The data-processing parameters I used for the CIFAR-10 dataset are listed below (the full command I assembled from them is sketched right after the list):
--n_tasks 50
--n_components -1
--alpha 0.4
--s_frac 1.0
--tr_frac 0.8
--seed 12345
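For reference, this is how I put those flags together. I am assuming the partitioning script lives at `data/cifar10/generate_data.py`; please correct me if the entry point is different.

```bash
# Assumed invocation of the repo's CIFAR-10 partitioning script.
# The script path is my guess; the flags are exactly the ones listed above
# (50 clients, Dirichlet alpha = 0.4, full dataset, 80/20 train/test split).
python data/cifar10/generate_data.py \
    --n_tasks 50 \
    --n_components -1 \
    --alpha 0.4 \
    --s_frac 1.0 \
    --tr_frac 0.8 \
    --seed 12345
```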
I then launched training as a W&B sweep with the following parameters (the full sweep file I assembled is sketched right after the list):
fine_grained_block_split: values: [5]
sparse_factor_scheduler: values: [constant]
bz: values: [128]
model_type: values: [cnn]  # use the CNN model
optimizer: values: [sgd, adam]  # added Adam as an optimizer option
lr_scheduler: values: [reduce_on_plateau_40, multi_step, reduce_on_plateau]
block_wise_prune: value: 1
n_rounds: values: [400]
local_steps: values: [1]
sparse_factor: values: [0.5]  # sparse factor used for CIFAR-10
lr_model: values: [0.01, 0.05, 0.1, 0.3]  # model learning rates
lr_gating: values: [0.01, 0.05, 0.1, 0.3]  # gating learning rates
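Written out as a complete W&B sweep file, my configuration looks like the sketch below. The `program` entry point and the grid-search `method` are my assumptions; all parameter values are the ones listed above.

```yaml
# Assumed W&B sweep-file layout; `program` and `method` are guesses,
# the parameters and values are copied from my list above.
program: main.py
method: grid
metric:
  name: node_agg/test/metric
  goal: maximize
parameters:
  fine_grained_block_split:
    values: [5]
  sparse_factor_scheduler:
    values: [constant]
  bz:
    values: [128]
  model_type:
    values: [cnn]
  optimizer:
    values: [sgd, adam]
  lr_scheduler:
    values: [reduce_on_plateau_40, multi_step, reduce_on_plateau]
  block_wise_prune:
    value: 1
  n_rounds:
    values: [400]
  local_steps:
    values: [1]
  sparse_factor:
    values: [0.5]
  lr_model:
    values: [0.01, 0.05, 0.1, 0.3]
  lr_gating:
    values: [0.01, 0.05, 0.1, 0.3]
```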
However, the node_agg/test/metric curve in W&B shows that performance keeps getting worse as training proceeds.
Could you tell me at which stage the problem is most likely to arise with these settings?
Also, which client-side metrics should I monitor during training?
Thank you for your reply.