# DFL-Secure-Aggregation

Federated learning (FL) enables a collaborative environment for training machine learning models without sharing training data between users. This is typically achieved by aggregating model gradients on a central server. Decentralized federated learning is a rising paradigm that enables users to collaboratively train machine learning models in a peer-to-peer manner, without the need for a central aggregation server. Before decentralized FL can be applied in real-world use cases, however, nodes that deviate from the FL process (Byzantine nodes) must be taken into account. Recent research has focused on Byzantine-robustness for client-server or fully connected network topologies, while ignoring the more complex network configurations possible with decentralized FL. Empirical evidence of Byzantine-robustness in decentralized FL networks is therefore needed.
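For illustration, here is a minimal sketch (in Python with NumPy; not code from this repository) contrasting the two aggregation modes: a central server averaging all client gradients, versus each node in a decentralized network averaging only with its graph neighbors.

```python
import numpy as np

def server_aggregate(gradients):
    """Client-server FL: the central server averages all client gradients."""
    return np.mean(np.stack(gradients), axis=0)

def decentralized_step(models, adjacency):
    """Decentralized FL: each node averages its model with its neighbors only."""
    new_models = []
    for i, w in enumerate(models):
        neighborhood = [w] + [models[j] for j in adjacency[i]]
        new_models.append(np.mean(np.stack(neighborhood), axis=0))
    return new_models

# Example: a 4-node ring topology, where node i only talks to i-1 and i+1.
models = [np.random.randn(3) for _ in range(4)]
ring = {i: [(i - 1) % 4, (i + 1) % 4] for i in range(4)}
models = decentralized_step(models, ring)
```

In the decentralized case, information propagates only along graph edges, so the network topology directly shapes how (and how far) a Byzantine node's influence spreads.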

We investigate the effects of state-of-the-art Byzantine-robust aggregation methods in complex, large-scale network structures. Our findings show that these aggregation strategies are not resilient to Byzantine agents embedded within large networks that are not fully connected.
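As background, the sketch below shows two standard Byzantine-robust aggregation rules from this literature, coordinate-wise median and trimmed mean. This is illustrative Python/NumPy, not this repository's implementation; it demonstrates how these rules filter a single extreme update within one node's neighborhood.

```python
import numpy as np

def coordinate_median(updates):
    """Coordinate-wise median: robust to a minority of arbitrary updates."""
    return np.median(np.stack(updates), axis=0)

def trimmed_mean(updates, trim=1):
    """Per coordinate, drop the `trim` largest and smallest values, then average."""
    stacked = np.sort(np.stack(updates), axis=0)
    return np.mean(stacked[trim:len(updates) - trim], axis=0)

# A node with 4 honest neighbors and 1 Byzantine neighbor sending garbage.
honest = [np.ones(3) + 0.01 * np.random.randn(3) for _ in range(4)]
byzantine = [np.full(3, 1e6)]
print(coordinate_median(honest + byzantine))   # close to the honest mean
print(trimmed_mean(honest + byzantine, trim=1))
```

Note that these guarantees assume a bounded fraction of Byzantine updates among those being aggregated. In a sparse topology a Byzantine node can constitute a large fraction of some node's neighborhood even when it is a small fraction of the whole network, which is the regime our experiments probe.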