
Learning Structural-Functional Brain Representations through Multi-Scale Adaptive Graph Attention for Cognitive Insight

Badhan Mazumder, Sir-Lord Wiafe, Aline Kotoski, Vince D. Calhoun, Dong Hye Ye

Abstract

Understanding how brain structure and function interact is key to explaining intelligence, yet modeling them jointly is challenging because the structural and functional connectomes capture complementary aspects of brain organization. We introduce the Multi-scale Adaptive Graph Network (MAGNet), a Transformer-style graph neural network framework that adaptively learns structure-function interactions. MAGNet leverages source-based morphometry from structural MRI to extract inter-regional morphological features and fuses them with functional network connectivity from resting-state fMRI. A hybrid graph integrates direct and indirect pathways, local-global attention refines connectivity importance, and a joint loss simultaneously enforces cross-modal coherence and optimizes the prediction objective end-to-end. On the ABCD dataset, MAGNet outperformed relevant baselines, demonstrating effective multimodal integration for advancing our understanding of cognitive function.

Paper Structure

This paper contains 14 sections, 3 equations, 3 figures, and 1 table.

Figures (3)

  • Figure 1: (A) After generating FNC from rs-fMRI and SBM from sMRI, a hybrid brain graph was constructed with unimodal, cross-modal (CMC), and multi-scale detour connections (MDC) measuring structural detour (SD). (B) MAGNet's flowchart: scaled dot-product attention enabled local message passing with node features and edge attributes, followed by multi-head self-attention for global refinement across $l$ layers. Refined node embeddings then underwent global average pooling (GAP) and a fully connected layer (FCL) for intelligence score prediction, optimized with a joint loss.
  • Figure 2: Outcomes of the performed ablation experiments (mean$\pm$standard deviation).
  • Figure 3: Top 3% significant brain network connections identified for (a) fluid, (b) crystallized, and (c) total intelligence.
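To make the local-global attention pipeline in Figure 1(B) concrete, the following is a minimal NumPy sketch of one such layer, not the authors' implementation: local scaled dot-product attention is masked to graph neighbors and biased by edge attributes, a dense multi-head self-attention then refines embeddings globally, and a GAP plus linear head produces a scalar score. All function names, the identity query/key/value projections, and the random inputs are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def local_attention(X, A, E):
    """Scaled dot-product attention restricted to graph neighbors.

    X: (N, d) node features; A: (N, N) adjacency mask of the hybrid graph;
    E: (N, N) edge-attribute bias (e.g. connectivity strength).
    """
    d = X.shape[1]
    scores = (X @ X.T) / np.sqrt(d) + E          # attention logits + edge bias
    scores = np.where(A > 0, scores, -1e9)       # mask non-edges before softmax
    return softmax(scores, axis=-1) @ X

def global_self_attention(X, n_heads=2):
    """Dense multi-head self-attention: every node attends to all nodes."""
    N, d = X.shape
    dh = d // n_heads
    heads = []
    for h in range(n_heads):
        Xh = X[:, h * dh:(h + 1) * dh]           # split features per head
        scores = (Xh @ Xh.T) / np.sqrt(dh)
        heads.append(softmax(scores, axis=-1) @ Xh)
    return np.concatenate(heads, axis=1)

def magnet_like_layer(X, A, E):
    X = X + local_attention(X, A, E)             # local message passing
    X = X + global_self_attention(X)             # global refinement
    return X

# Toy hybrid graph: 6 brain-network nodes with 8-dim features.
rng = np.random.default_rng(0)
N, d = 6, 8
X = rng.standard_normal((N, d))
A = (rng.random((N, N)) < 0.5).astype(float)
np.fill_diagonal(A, 1.0)                         # each node attends to itself
E = 0.1 * rng.standard_normal((N, N))

H = magnet_like_layer(X, A, E)                   # stacked for l layers in practice
w = rng.standard_normal(d)                       # illustrative FCL weights
score = float(H.mean(axis=0) @ w)                # GAP + fully connected head
print(H.shape)
```

In the actual model the layer would be repeated $l$ times with learned projections and trained end-to-end against the joint (prediction plus cross-modal coherence) loss; this sketch only illustrates the data flow.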