Enhancing Federated Learning with Adaptive Differential Privacy and Priority-Based Aggregation

Mahtab Talaei, Iman Izadi

TL;DR

To preserve privacy efficiently, the paper proposes a personalized DP framework that injects noise according to clients' relative impact factors and aggregates parameters while accounting for heterogeneities and adjusting properties.

Abstract

Federated learning (FL), a novel branch of distributed machine learning (ML), develops global models through a private procedure without direct access to local datasets. However, it is still possible to access the model updates (gradient updates of deep neural networks) transferred between clients and servers, potentially revealing sensitive local information to adversaries through model inversion attacks. Differential privacy (DP) offers a promising approach to addressing this issue by adding noise to the parameters. On the other hand, heterogeneities in data structure, storage, communication, and computational capabilities of devices can cause convergence problems and delays in developing the global model. Personalized weighted averaging of local parameters, based on the resources of each device, can yield a better aggregated model in each round. In this paper, to preserve privacy efficiently, we propose a personalized DP framework that injects noise according to clients' relative impact factors and aggregates parameters while accounting for heterogeneities and adjusting properties. To fulfill the DP requirements, we first analyze the convergence bound of the FL algorithm when impact factors are personalized and fixed throughout the learning process. We then further study the convergence property under time-varying (adaptive) impact factors.
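
As a rough illustration of the two ingredients described above, the minimal Python sketch below combines per-client Gaussian noise whose standard deviation is tied to a client's relative impact factor with a priority-based weighted average at the server. It is a sketch under stated assumptions: the function names, the normalization, and the proportional coupling between impact factor and noise SD are illustrative choices, not the paper's algorithm.

```python
import numpy as np

def clip_and_perturb(update, clip_norm, sigma, rng):
    """Clip an update to L2-norm clip_norm, then add Gaussian noise with SD sigma."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, sigma, size=update.shape)

def personalized_round(updates, impact_factors, clip_norm, base_sigma, rng):
    """One aggregation round: per-client noise scaled by its relative impact
    factor (an assumed coupling), followed by an impact-weighted average."""
    p = np.asarray(impact_factors, dtype=float)
    p = p / p.sum()  # normalize so the impact factors sum to one
    noisy = [clip_and_perturb(u, clip_norm, base_sigma * w, rng)
             for u, w in zip(updates, p)]
    return sum(w * u for w, u in zip(p, noisy))  # priority-based weighted average

# Toy usage: three clients with unequal impact factors.
rng = np.random.default_rng(0)
updates = [rng.normal(size=10) for _ in range(3)]
global_update = personalized_round(updates, [0.5, 0.3, 0.2],
                                   clip_norm=1.0, base_sigma=0.1, rng=rng)
```

Making the impact factors time-varying, as in the adaptive variant the paper studies, would amount to recomputing `impact_factors` each round from the clients' current resources and data.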

Paper Structure

This paper contains 13 sections, 6 theorems, 77 equations, 1 figure, 1 table.

Key Result

Theorem 1

Considering $T$ as both the number of aggregation rounds and the maximum number of exposures of the broadcast channel, the standard deviation (SD) of the server-side noise is given by
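
The closed-form expression itself is not reproduced in this summary. For orientation only, the standard Gaussian-mechanism calibration below (a background assumption, not the paper's Theorem 1) ties the noise SD $\sigma$ to the $\ell_2$-sensitivity $\Delta s$ and the budget $(\epsilon, \delta)$; exposing the broadcast channel up to $T$ times then forces $\sigma$ to grow with $T$ under sequential composition, which is why the server-side SD depends on $T$.

```latex
% Standard Gaussian-mechanism calibration (background assumption, not Theorem 1):
% a single release of a query with l2-sensitivity \Delta s is (\epsilon,\delta)-DP when
\sigma \;\ge\; \frac{c\,\Delta s}{\epsilon},
\qquad c = \sqrt{2 \ln(1.25/\delta)} .
```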

Figures (1)

  • Figure 1: An FL training model.

Theorems & Definitions (13)

  • Definition 1: $(\epsilon, \delta)$-DP (the standard statement is recalled after this list)
  • Theorem 1: server-side DP
  • Proof
  • Lemma 1: $A$-local dissimilarity
  • Proof
  • Lemma 2: Per-iteration expected increment
  • Proof
  • Theorem 2: Convergence upper bound of personalized ....
  • Proof
  • Lemma 3: Per-iteration expected increment: Extension
  • ...and 3 more
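
For reference, Definition 1 is the standard $(\epsilon, \delta)$-DP guarantee; the statement below is the textbook version (as in Dwork and Roth), not quoted from the paper.

```latex
% (\epsilon, \delta)-differential privacy (standard textbook statement):
% a randomized mechanism \mathcal{M} is (\epsilon, \delta)-DP if, for all adjacent
% datasets D and D' differing in a single record and every measurable output set S,
\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\epsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta .
```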