Implemented in TensorFlow.
- The copyright of the code in this project belongs to the authors of the Dap-FL paper; the code may be used for academic research only. If you quote or reproduce it, please indicate the source and the original link.
- The final right of interpretation of this copyright notice and disclaimer belongs to the authors.
- primal-dual DDPG from "Accelerated Primal-Dual Policy Optimization for Safe Reinforcement Learning"
- defines the basic DDPG model with an actor, a critic, and a cost network (a minimal sketch follows this list)
- executes FL training in Dap-FL
- FL with fixed hyper-parameters, i.e., FedAvg (an aggregation sketch follows this list)
- model inversion attacks (a generic illustration follows this list)
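The actor, critic, and cost components referenced above can be sketched in TensorFlow/Keras as below. This is a minimal illustration only; the layer sizes and the state/action dimensions are assumptions, not the repository's actual configuration, and the primal-dual update itself is omitted.

```python
import tensorflow as tf

STATE_DIM, ACTION_DIM = 4, 2  # illustrative dimensions, not the repository's values


def build_actor():
    """Deterministic policy: maps a state to a bounded continuous action."""
    state = tf.keras.Input(shape=(STATE_DIM,))
    x = tf.keras.layers.Dense(64, activation="relu")(state)
    x = tf.keras.layers.Dense(64, activation="relu")(x)
    action = tf.keras.layers.Dense(ACTION_DIM, activation="tanh")(x)  # actions in [-1, 1]
    return tf.keras.Model(state, action)


def build_q_network():
    """State-action value network; used for both the reward critic and the cost critic."""
    state = tf.keras.Input(shape=(STATE_DIM,))
    action = tf.keras.Input(shape=(ACTION_DIM,))
    x = tf.keras.layers.Concatenate()([state, action])
    x = tf.keras.layers.Dense(64, activation="relu")(x)
    x = tf.keras.layers.Dense(64, activation="relu")(x)
    q_value = tf.keras.layers.Dense(1)(x)
    return tf.keras.Model([state, action], q_value)


actor = build_actor()
critic = build_q_network()       # estimates the return
cost_critic = build_q_network()  # estimates the constraint cost used by the primal-dual update
```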
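For the fixed-hyper-parameter baseline, the aggregation rule is standard FedAvg: the server replaces the global weights with a data-size-weighted average of the clients' weights. A minimal sketch, assuming each client reports its Keras weight list and local sample count (variable names here are illustrative):

```python
import numpy as np


def fedavg(client_weights, client_sizes):
    """Data-size-weighted average of per-client weight lists (e.g. from model.get_weights())."""
    total = float(sum(client_sizes))
    coefficients = [n / total for n in client_sizes]
    aggregated = []
    for layer_idx in range(len(client_weights[0])):
        layer = sum(c * np.asarray(w[layer_idx])
                    for c, w in zip(coefficients, client_weights))
        aggregated.append(layer)
    return aggregated


# Typical use in one round:
# global_model.set_weights(fedavg(collected_client_weights, collected_client_sizes))
```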
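The attack code itself lives in the repository. Purely as a generic illustration, and not the repository's specific attack, a Fredrikson-style model inversion reconstructs a class-representative input by gradient ascent on the target class confidence of a trained classifier:

```python
import tensorflow as tf


def invert_class(model, target_class, input_shape=(28, 28, 1), steps=200, lr=0.1):
    """Gradient-ascent reconstruction of an input the classifier assigns to target_class.

    Assumes `model` outputs class probabilities (softmax) for inputs scaled to [0, 1];
    this is an illustrative helper, not part of the repository.
    """
    x = tf.Variable(tf.random.uniform((1,) + input_shape))  # start from random noise
    optimizer = tf.keras.optimizers.Adam(learning_rate=lr)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            probs = model(x, training=False)
            loss = -tf.math.log(probs[0, target_class] + 1e-8)  # maximise target confidence
        grads = tape.gradient(loss, [x])
        optimizer.apply_gradients(zip(grads, [x]))
        x.assign(tf.clip_by_value(x, 0.0, 1.0))  # keep pixels in a valid range
    return x.numpy()
```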
- Logistic regression on MNIST
- CNN on MNIST (an illustrative Keras definition follows this list)
- ResNet on MNIST
- CNN on Fashion-MNIST
- CNN on FEMNIST
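The exact task architectures are defined in the repository. As an illustrative example only, the CNN-on-MNIST task could use a small Keras model along these lines (the layer choices are assumptions, not the paper's architecture):

```python
import tensorflow as tf


def build_mnist_cnn(num_classes=10):
    """Small CNN for 28x28x1 MNIST-style inputs; illustrative only."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(28, 28, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])


model = build_mnist_cnn()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```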
- Dap-FL (defined in ddpg_enhanced_second_train.py)
- Large (defined in baseline_second_train.py)
- Small (defined in baseline_second_train.py)
- DDPG-$\eta$ (defined in ddpg_enhanced_second_train.py with a fixed number of training epochs)
- DDPG-$\alpha$ (defined in ddpg_enhanced_second_train.py with a fixed learning rate)
- DDPG-client (see https://ieeexplore.ieee.org/abstract/document/9372789/)
- DQN (see https://ieeexplore.ieee.org/abstract/document/9244624/)