Federated Graph Semantic and Structural Learning

Wenke Huang, Guancheng Wan, Mang Ye, Bo Du

TL;DR

This work tackles non-IID challenges in federated graph learning by decoupling heterogeneity into node-level semantics and graph-level structure. It introduces FGSSL, comprising Federated Node Semantic Contrast (FNSC) to align local node representations with global class-consistent signals, and Federated Graph Structure Distillation (FGSD) to distill global neighborhood similarity into the local models, preserving structural information while maintaining discriminability. The approach shows consistent improvements over strong federated baselines on three graph benchmarks, with ablations confirming the value of both components. By leveraging the global model for calibration during local updates, FGSSL achieves better generalization without added communication rounds, offering a practical path for robust federated graph learning in heterogeneous environments.

Abstract

Federated graph learning collaboratively trains a global graph neural network over distributed graphs, where the non-independent and identically distributed (non-IID) property is one of the major challenges. Most related works focus on traditional distributed tasks such as images and speech and are not applicable to graph structures. This paper first reveals that local client distortion arises from both node-level semantics and graph-level structure. First, for node-level semantics, we find that contrasting nodes from distinct classes helps provide strong discrimination. We pull each local node towards the global node of the same class and push it away from global nodes of different classes. Second, we postulate that a well-structured graph neural network yields similar representations for neighboring nodes due to the inherent adjacency relationships. However, aligning each node with its adjacent nodes hinders discrimination due to potential class inconsistency. We therefore transform the adjacency relationships into a similarity distribution and leverage the global model to distill this relational knowledge into the local model, which preserves both the structural information and the discriminability of the local model. Empirical results on three graph datasets demonstrate the superiority of the proposed method over its counterparts.
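The node-level idea above (pull each local node toward the global embedding of the same class, push it away from other classes) can be sketched as a supervised contrastive objective. The following is a minimal NumPy sketch under assumed conventions, not the paper's exact formulation: the function name, the cosine-similarity measure, and the temperature `tau` are all illustrative.

```python
import numpy as np

def node_semantic_contrast(z_local, z_global, labels, tau=0.5):
    """Supervised contrastive loss sketch: for each local node, treat global
    embeddings of the same class as positives and all others as negatives.

    z_local, z_global: (N, d) node embeddings from the local / global model.
    labels: (N,) integer class labels shared by both embedding sets.
    """
    # Row-normalise so dot products become cosine similarities.
    z_l = z_local / np.linalg.norm(z_local, axis=1, keepdims=True)
    z_g = z_global / np.linalg.norm(z_global, axis=1, keepdims=True)
    logits = (z_l @ z_g.T) / tau                      # (N, N) similarity logits
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    pos_mask = labels[:, None] == labels[None, :]     # same-class (positive) pairs
    # Average log-probability over the positive global nodes, then negate.
    loss = -(log_prob * pos_mask).sum(axis=1) / pos_mask.sum(axis=1)
    return loss.mean()
```

With class-separated embeddings this loss is lower than with mixed-up ones, which is the behaviour the contrast is meant to enforce.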

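The structure-level idea (turn adjacency into a neighbor-similarity distribution and distill it from the global model into the local one) can likewise be sketched in a few lines. This is a minimal NumPy sketch under stated assumptions, not the paper's exact objective: binary adjacency with at least one neighbor per node, a softmax over neighbor cosine similarities, and KL divergence as the distillation loss are all illustrative choices.

```python
import numpy as np

def structure_distillation(z_local, z_global, adj, tau=1.0):
    """Graph structure distillation sketch: match each node's similarity
    distribution over its neighbors under the local model (student) to the
    one under the global model (teacher).

    z_local, z_global: (N, d) node embeddings; adj: (N, N) binary adjacency
    (every node is assumed to have at least one neighbor).
    """
    def neighbor_dist(z):
        z = z / np.linalg.norm(z, axis=1, keepdims=True)
        sim = (z @ z.T) / tau
        sim = np.where(adj > 0, sim, -np.inf)        # restrict to neighbors
        sim = sim - sim.max(axis=1, keepdims=True)   # numerical stability
        e = np.exp(sim)                              # non-neighbors -> 0
        return e / e.sum(axis=1, keepdims=True)
    p = neighbor_dist(z_global)  # teacher relation distribution
    q = neighbor_dist(z_local)   # student relation distribution
    eps = 1e-12
    kl = (p * (np.log(p + eps) - np.log(q + eps))).sum(axis=1)
    return kl.mean()
```

Because the target is a distribution over neighbors rather than a hard alignment with each adjacent node, class-inconsistent neighbors only contribute proportionally to their similarity, which is how the relational view sidesteps the discrimination problem noted in the abstract.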

Paper Structure

This paper contains 16 sections, 12 equations, 5 figures, 4 tables, 1 algorithm.

Figures (5)

  • Figure 1: Problem illustration. We present structure- and semantic-level similarities among clients. Darker colors indicate more similar node and graph representations across participants, while lighter colors indicate dissimilarity. (a) Semantic bias: clients predict inconsistent classes for the same node. (b) Structure bias: clients hold distinct similarities among neighboring nodes. In this work, we conduct semantic-level and structure-level calibration to achieve better federated graph learning performance.
  • Figure 2: Architecture illustration of Federated Graph Semantic and Structural Learning (FGSSL). The left yellow box corresponds to the federated aggregation scheme (e.g., FedAvg), while the right grey box illustrates the local training process. FGSSL includes two components: (a) Federated Node Semantic Contrast and (b) Federated Graph Structure Distillation. Best viewed in color. Zoom in for details.
  • Figure 3: Training curves of average test accuracy over 200 communication epochs on the Citeseer dataset. See \ref{p:convergence} for details.
  • Figure 4: Analysis of hyper-parameters in FGSSL. Node classification results on three datasets under different $\tau$ and $\omega$ values with $M = 5$, in which green represents Cora, yellow represents Pubmed, and blue represents Citeseer. Refer to \ref{p:hyper_parameter} for details.
  • Figure 5: Visualization of classification results on the Citeseer dataset with $M = 5$; each panel corresponds to one method. Logits are colored based on class labels.