Supplementary File: Coded Cooperative Networks for Semi-Decentralized Federated Learning
Shudi Weng, Ming Xiao, Chao Ren, Mikael Skoglund
TL;DR
The paper tackles FL performance degradation caused by communication stragglers in wireless settings by introducing a deterministic coded cooperative network that enables semi-decentralized training without requiring global network knowledge. It exploits wireless diversity through MDS-based deterministic network coding (DNC) so that the parameter server (PS) can recover a meaningful subset of client updates despite intermittent links, and it provides outage and convergence analyses to support robustness. The approach is validated on MNIST, where the proposed scheme matches perfect-link FL in both i.i.d. and non-i.i.d. data scenarios while outperforming variants that rely on prior network information or lack coding diversity. This work offers a scalable, practically deployable mechanism for mitigating communication bottlenecks in distributed FL over wireless networks and suggests applicability to large-scale models beyond MNIST, including potential LLM training scenarios.
Abstract
To enhance straggler resilience in federated learning (FL) systems, a semi-decentralized approach that enables collaboration between clients has recently been proposed. Unlike existing semi-decentralized schemes, which adaptively adjust collaboration weights according to the network topology, this letter proposes a deterministic coded network that leverages wireless diversity for semi-decentralized FL without requiring prior information about the entire network. Furthermore, theoretical analyses of the outage probability and the convergence rate of the proposed scheme are provided. Finally, the superiority of the proposed method over benchmark schemes is demonstrated through comprehensive simulations.
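
To illustrate the MDS recovery property that underlies the coded scheme, the minimal sketch below (not the paper's implementation) uses a real-valued Vandermonde generator matrix as a stand-in for the actual DNC construction; the function names (vandermonde, encode, decode) and all parameter values are hypothetical. It shows the core idea: n coded combinations of k client updates let the PS recover every update from any k packets that survive link outages, because every k x k submatrix of a Vandermonde matrix with distinct evaluation points is invertible.

    import numpy as np

    def vandermonde(n, k):
        """n x k Vandermonde generator with distinct evaluation points."""
        points = np.arange(1, n + 1, dtype=float)
        return np.vander(points, k, increasing=True)

    def encode(updates, G):
        """Form n coded packets; each row of G mixes the k client updates."""
        return G @ updates                      # shape (n, d)

    def decode(received, G, surviving):
        """Recover all k updates from any k surviving coded packets."""
        G_sub = G[surviving, :]                 # k x k, invertible by the MDS property
        return np.linalg.solve(G_sub, received[surviving, :])

    k, n, d = 4, 6, 8                           # clients, coded packets, model dimension
    rng = np.random.default_rng(0)
    updates = rng.standard_normal((k, d))       # stacked client model updates

    G = vandermonde(n, k)
    packets = encode(updates, G)

    # Simulate outages/stragglers: only packets 0, 2, 3, 5 reach the PS.
    surviving = [0, 2, 3, 5]
    recovered = decode(packets, G, surviving)
    assert np.allclose(recovered, updates)      # exact recovery from any k survivors

Here the deterministic generator G plays the role of the fixed coding schedule: it is chosen once, independently of the network topology, so clients need no prior knowledge of which links will fail.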
