
DSBD: Dual-Aligned Structural Basis Distillation for Graph Domain Adaptation

Yingxu Wang, Kunyu Zhang, Jiaxin Huang, Mengzhu Wang, Mingyan Xiao, Siyang Gao, Nan Yin

Abstract

Graph domain adaptation (GDA) aims to transfer knowledge from a labeled source graph to an unlabeled target graph under distribution shifts. However, existing methods are largely feature-centric and overlook structural discrepancies, which become particularly detrimental under significant topology shifts. Such discrepancies alter both geometric relationships and spectral properties, leading to unreliable transfer of graph neural networks (GNNs). To address this limitation, we propose Dual-Aligned Structural Basis Distillation (DSBD) for GDA, a novel framework that explicitly models and adapts cross-domain structural variation. DSBD constructs a differentiable structural basis by synthesizing continuous probabilistic prototype graphs, enabling gradient-based optimization over graph topology. The basis is learned under source-domain supervision to preserve semantic discriminability, while being explicitly aligned to the target domain through a dual-alignment objective. Specifically, geometric consistency is enforced via permutation-invariant topological moment matching, and spectral consistency is achieved through Dirichlet energy calibration, jointly capturing structural characteristics across domains. Furthermore, we introduce a decoupled inference paradigm that mitigates source-specific structural bias by training a new GNN on the distilled structural basis. Extensive experiments on graph and image benchmarks demonstrate that DSBD consistently outperforms state-of-the-art methods.
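As a concrete illustration of the dual-alignment objective described above, the sketch below computes a geometric term that matches permutation-invariant topological moments and a spectral term that calibrates Dirichlet energy, both differentiable with respect to a soft prototype adjacency. It is a minimal sketch under stated assumptions: the use of degree-sequence moments as the topological statistic, the squared-difference penalties, and the names `degree_moments`, `dirichlet_energy`, and `dual_alignment_loss` are illustrative choices, not the paper's implementation.

```python
import torch

def dirichlet_energy(X, A):
    """Dirichlet energy tr(X^T L X) with the combinatorial Laplacian L = D - A."""
    deg = A.sum(dim=1)
    L = torch.diag(deg) - A
    return torch.trace(X.T @ L @ X)

def degree_moments(A, orders=(1, 2, 3)):
    """Permutation-invariant topological moments: raw moments of the degree sequence."""
    deg = A.sum(dim=1)
    return torch.stack([(deg ** k).mean() for k in orders])

def dual_alignment_loss(A_syn, X_syn, A_tgt, X_tgt, lam1=1.0, lam2=1.0):
    """Geometric (moment-matching) plus spectral (Dirichlet-energy) alignment terms."""
    geo = (degree_moments(A_syn) - degree_moments(A_tgt)).pow(2).sum()
    spec = (dirichlet_energy(X_syn, A_syn) - dirichlet_energy(X_tgt, A_tgt)).pow(2)
    return lam1 * geo + lam2 * spec

# A differentiable prototype adjacency can be parameterized with logits, e.g.
#   A_logits = torch.nn.Parameter(torch.randn(n, n))
#   A_syn = torch.sigmoid(0.5 * (A_logits + A_logits.T))  # symmetric, entries in (0, 1)
# so that gradients of dual_alignment_loss flow back to the prototype topology.
```

Because both terms depend on the adjacency only through permutation-invariant aggregates, the loss is insensitive to node ordering, matching the permutation-invariance requirement stated in the abstract.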

Paper Structure

This paper contains 35 sections, 2 theorems, 15 equations, 10 figures, 20 tables, and 1 algorithm.

Key Result

Theorem 1 (Generalization Bound via Dual-Aligned Structural Basis)

Let $f$ denote the graph encoder and $h$ the classifier. Let $\mathcal{R}_{\mathcal{D}_T}(h \circ f)$ be the expected risk on the target domain $\mathcal{D}_T$, and $\hat{\mathcal{R}}_{\mathcal{S}_{\mathrm{syn}}}(h \circ f)$ the empirical risk evaluated on the synthesized structural basis $\mathcal{S}_{\mathrm{syn}}$. Then $\mathcal{R}_{\mathcal{D}_T}(h \circ f)$ is upper-bounded by $\hat{\mathcal{R}}_{\mathcal{S}_{\mathrm{syn}}}(h \circ f)$ plus terms measuring the discrepancy between $\mathcal{S}_{\mathrm{syn}}$ and $\mathcal{D}_T$ in $\mathcal{M}$ and $\Omega$, where $\mathcal{M}$ and $\Omega$ denote the geometric moments and Dirichlet energy, respectively.
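For reference, the spectral quantity $\Omega$ appearing in the bound can be read as the standard graph Dirichlet energy; assuming the combinatorial Laplacian $L = D - A$ (an assumption here, as the paper may use a normalized variant), it takes the familiar form

$$
\Omega(X, A) \;=\; \operatorname{tr}\!\left(X^{\top} L X\right) \;=\; \frac{1}{2} \sum_{i,j} A_{ij}\,\lVert x_i - x_j \rVert_2^2, \qquad L = D - A,
$$

so calibrating $\Omega$ across domains controls how strongly the synthesized basis penalizes feature variation along edges, i.e., its effective smoothness under GNN filtering.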

Figures (10)

  • Figure 1: The key challenges in GDA: (a) The lack of a differentiable structural substrate prevents explicit optimization of cross-domain topology. (b) Geometric alignment does not guarantee spectral consistency, leading to mismatched GNN filtering across domains. (c) Source-specific structural bias entangles message passing, resulting in aggregation mismatch under topology shift.
  • Figure 2: Overview of the proposed DSBD, which consists of two key steps: (1) Dual-Aligned Structural Basis Distillation, which constructs a differentiable structural substrate by distilling source knowledge into compact synthetic graphs and enforcing joint geometric and spectral alignment with the target domain; (2) Structurally Calibrated Target Inference, which eliminates source structural bias via decoupled retraining on the distilled basis, ensuring topology-aware transfer under domain shift (a minimal sketch of step (2) follows this figure list).
  • Figure 3: t-SNE visualizations on the Mutagenicity dataset for DSBD and baselines.
  • Figure 4: Sensitivity analysis of the number of synthetic bases $K$ and the balance coefficients ($\lambda_1$, $\lambda_2$) on the Mutagenicity dataset.
  • Figure 5: Distribution of Dirichlet energy and graph density of the distilled basis for DSBD and baselines.
  • ...and 5 more figures
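The second step in Figure 2, decoupled target inference, can be pictured with the minimal sketch below: a fresh GNN is trained only on the distilled basis graphs and then used for target prediction, so no source-specific structural bias is carried over. The dense two-layer GCN, the mean-pooling readout, and the names `DenseGCN` and `retrain_on_basis` are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseGCN(nn.Module):
    """Minimal two-layer GCN that accepts a dense (possibly soft) adjacency matrix."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, n_classes)

    def forward(self, X, A):
        # Symmetric normalization: A_hat = D^{-1/2} (A + I) D^{-1/2}
        A = A + torch.eye(A.size(0), device=A.device)
        d = A.sum(dim=1).clamp(min=1e-6).pow(-0.5)
        A_hat = d.unsqueeze(1) * A * d.unsqueeze(0)
        H = torch.relu(A_hat @ self.lin1(X))
        H = A_hat @ self.lin2(H)
        return H.mean(dim=0)          # graph-level logits via mean pooling

def retrain_on_basis(basis, in_dim, hid_dim, n_classes, epochs=200, lr=1e-2):
    """Train a fresh GNN on distilled (X_k, A_k, y_k) triples only; no source graphs."""
    model = DenseGCN(in_dim, hid_dim, n_classes)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = sum(F.cross_entropy(model(X, A).unsqueeze(0), y.view(1))
                   for X, A, y in basis) / len(basis)
        loss.backward()
        opt.step()
    return model    # the returned model is what gets evaluated on target graphs
```

Keeping this retraining decoupled from the source encoder is what the abstract describes as mitigating source-specific structural bias: the inference model never aggregates messages over source topology.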

Theorems & Definitions (2)

  • Theorem 1: Generalization Bound via Dual-Aligned Structural Basis
  • Theorem 2: Generalization Benefit of Structural Bias Isolation