Coding-Enforced Robust Secure Aggregation for Federated Learning Under Unreliable Communication
Authors
Shudi Weng, Chao Ren, Yizhou Zhao, Ming Xiao, Mikael Skoglund
Abstract
This work studies privacy-preserving federated learning (ppFL) under unreliable communication. In ppFL, zero-sum privacy noise enables privacy protection without sacrificing model accuracy, effectively overcoming the privacy-utility trade-off. In practice, however, unreliable communication can randomly disrupt the coordination of the zero-sum noises, leading to aggregation errors and unpredictable partial participation, which severely harm model accuracy and learning performance. To overcome these challenges, we propose a robust coding-enforced structured secure aggregation method, termed secure cooperative gradient coding (SecCoGC), which enables exact reconstruction of the global model under unreliable communication while allowing for arbitrarily strong privacy preservation. This paper presents a complete problem formulation and constructions of real-field zero-sum privacy noise, and introduces fairness as a privacy metric. Privacy across all protocol layers of SecCoGC is evaluated, accounting for the correlation among privacy noises and their linear combinations under unreliable communication. Moreover, a distinct convergence analysis is provided for the FL algorithm with a binary outcome for global model recovery. Experimental results demonstrate that SecCoGC achieves strong resilience to unreliable communication while maintaining varying levels of privacy preservation, yielding test accuracy improvements of 20%-70% over existing benchmark methods.
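The zero-sum noise idea underlying the abstract can be illustrated with a minimal sketch: each pair of clients shares a random mask that one adds and the other subtracts, so individual updates are hidden but the masks cancel in the server's sum. The function name `zero_sum_masks`, the Gaussian mask distribution, and the toy updates below are illustrative assumptions, not the paper's actual SecCoGC construction, which additionally handles unreliable links via coding.

```python
import random

def zero_sum_masks(num_clients, dim, seed=0):
    """Build per-client masks from pairwise shared randomness.

    Each pair (i, j) draws a shared random vector; client i adds it to
    its mask and client j subtracts it, so the masks sum (approximately,
    up to floating-point rounding) to zero across all clients.
    """
    rng = random.Random(seed)
    masks = [[0.0] * dim for _ in range(num_clients)]
    for i in range(num_clients):
        for j in range(i + 1, num_clients):
            pair = [rng.gauss(0.0, 10.0) for _ in range(dim)]
            for k in range(dim):
                masks[i][k] += pair[k]  # client i adds the shared noise
                masks[j][k] -= pair[k]  # client j subtracts it
    return masks

# Each client perturbs its local update with its mask before upload;
# an eavesdropper seeing one masked update learns little about it.
updates = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
masks = zero_sum_masks(3, 2)
masked = [[u + m for u, m in zip(upd, msk)] for upd, msk in zip(updates, masks)]

# The server sums the masked updates; the masks cancel and the exact
# aggregate [9.0, 12.0] is recovered (up to floating-point error).
aggregate = [sum(col) for col in zip(*masked)]
```

Note that this naive pairwise scheme is exactly what breaks under unreliable communication: if one client's masked update is lost, its share of the noise no longer cancels, which is the failure mode SecCoGC is designed to avoid.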