SecureAFL: Secure Asynchronous Federated Learning

Anjun Gao, Feng Wang, Zhenglin Wan, Yueyang Quan, Zhuqing Liu, Minghong Fang

Abstract

Federated learning (FL) enables multiple clients to collaboratively train a global machine learning model via a server without sharing their private training data. In traditional FL, the system follows a synchronous approach, where the server waits for model updates from numerous clients before aggregating them to update the global model. However, synchronous FL is hindered by the straggler problem. To address this, the asynchronous FL architecture allows the server to update the global model immediately upon receiving any client's local model update. Despite its advantages, the decentralized nature of asynchronous FL makes it vulnerable to poisoning attacks. Several defenses tailored for asynchronous FL have been proposed, but these mechanisms remain susceptible to advanced attacks or rely on unrealistic server assumptions. In this paper, we introduce SecureAFL, an innovative framework designed to secure asynchronous FL against poisoning attacks. SecureAFL improves the robustness of asynchronous FL by detecting and discarding anomalous updates while estimating the contributions of missing clients. Additionally, it utilizes Byzantine-robust aggregation techniques, such as coordinate-wise median, to integrate the received and estimated updates. Extensive experiments on various real-world datasets demonstrate the effectiveness of SecureAFL.
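To make the aggregation step concrete, the sketch below illustrates the general pattern the abstract describes: discard anomalous updates, substitute estimates for missing clients, and combine the remaining updates with a coordinate-wise median. This is a minimal illustration, not SecureAFL's actual algorithm; the norm-based anomaly check and the reuse of cached stale updates as estimates are assumptions made only for this example.

```python
# Minimal sketch (assumed, not SecureAFL's actual method): filter suspicious
# updates, fill in missing clients with estimates, then aggregate with a
# coordinate-wise median.
import numpy as np

def coordinate_wise_median(updates):
    """Byzantine-robust aggregation: per-coordinate median over client updates."""
    return np.median(np.stack(updates, axis=0), axis=0)

def aggregate_round(received, cached, n_clients, norm_threshold=10.0):
    """
    received: dict {client_id: np.ndarray} of updates that arrived this round.
    cached:   dict {client_id: np.ndarray} of the last update seen from each
              client, used here as a crude stand-in for estimating the
              contributions of missing clients.
    """
    updates = []
    for cid in range(n_clients):
        if cid in received:
            u = received[cid]
            # Stand-in anomaly check: drop updates with abnormally large norm.
            if np.linalg.norm(u) <= norm_threshold:
                updates.append(u)
        elif cid in cached:
            # Missing client: reuse its most recent (stale) update as an estimate.
            updates.append(cached[cid])
    return coordinate_wise_median(updates)

# Toy usage with 3-dimensional updates; client 2's update looks malicious.
received = {0: np.array([0.1, -0.2, 0.05]), 2: np.array([50.0, 50.0, 50.0])}
cached = {1: np.array([0.08, -0.15, 0.02])}
print(aggregate_round(received, cached, n_clients=3))
```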

Paper Structure

This paper contains 29 sections, 5 theorems, 31 equations, 4 figures, 8 tables, and 3 algorithms.

Key Result

Theorem 1

Let the smoothness through heterogeneity assumptions hold. Suppose the stepsize satisfies [condition omitted]. Then for any $T \ge 1$, [bound omitted], where $E_{\mathrm{track}}^2$ can be chosen as [expression omitted] for an absolute constant $C_{\mathrm{med}} > 0$ that depends only on the coordinate-wise median bound used in the analysis.

Figures (4)

  • Figure 1: Impact of fraction of malicious clients on Fashion-MNIST dataset.
  • Figure 2: Impact of client delay on Fashion-MNIST dataset.
  • Figure 3: Impact of degree of Non-IID on Fashion-MNIST dataset.
  • Figure 4: Impact of total number of clients on Fashion-MNIST dataset.

Theorems & Definitions (7)

  • Theorem 1: Convergence of SecureAFL under bounded tracking error
  • Corollary 1: Diminishing stepsize
  • Remark
  • Remark
  • Lemma 1
  • Lemma 2
  • Lemma 3: Median aggregation as a bounded tracking error