Structural Graph Probing of Vision-Language Models

Haoyu He, Yue Zhuo, Yu Zheng, Qi R. Wang

Abstract

Vision-language models (VLMs) achieve strong multimodal performance, yet how computation is organized across populations of neurons remains poorly understood. In this work, we study VLMs through the lens of neural topology, representing each layer as a within-layer correlation graph derived from neuron-neuron co-activations. This view allows us to ask whether population-level structure is behaviorally meaningful, how it changes across modalities and depth, and whether it identifies causally influential internal components under intervention. We show that correlation topology carries recoverable behavioral signal; moreover, cross-modal structure progressively consolidates with depth around a compact set of recurrent hub neurons, whose targeted perturbation substantially alters model output. Neural topology thus emerges as a meaningful intermediate scale for VLM interpretability: richer than local attribution, more tractable than full circuit recovery, and empirically tied to multimodal behavior. Code is publicly available at https://github.com/he-h/vlm-graph-probing.

Paper Structure

This paper contains 29 sections, 9 equations, 8 figures, and 5 tables.

Figures (8)

  • Figure 1: Overview of neural topology construction.
  • Figure 2: Sparsity robustness and depth dependence of graph probing. Left: probing accuracy as a function of graph sparsity (top 1%–20% of neuron correlations retained). Right: probing accuracy across normalized layer depth. Accuracy is stable across sparsity levels, while depth-wise peak predictiveness differs by architecture.
  • Figure 3: Token-level cross-modal correlation dynamics across depth. Layer-wise token–token correlations for Vision–Vision, Vision–Text, and Text–Text pairs on TDIUC (mean $\pm$ std) across multiple VLM families and scales. Vision–Text correlations increase with depth, consistent with progressively stronger multimodal integration in later layers.
  • Figure 4: Cross-sample stability of hub definitions. Recurrence of top 1% hubs across samples on TDIUC for graph-wide, modality-specific, and activation-based hub definitions. Graph-derived hubs are the most stable, indicating that structurally central neurons occupy more persistent roles than alternative hub candidates.
  • Figure 5: Layer-wise stability of graph hubs. Recurrence of top 1% graph-derived hubs across samples at different depths on TDIUC. Intermediate layers show the strongest hub stability, suggesting the most persistent population-level organization emerges in mid-depth representations.
  • ...and 3 more figures
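The hub-recurrence analysis behind Figures 4 and 5 can be sketched as follows. This is a hedged illustration under assumptions the captions do not pin down: hubs are taken to be the top 1% of neurons by weighted degree in each per-sample correlation graph, and recurrence is the fraction of samples in which a neuron is selected as a hub. Both function names and the synthetic data are hypothetical.

```python
# Hedged sketch: recurrence of top-1% graph-derived hubs across samples.
# Assumption: "hub" means a neuron in the top `frac` by weighted degree
# of its per-sample correlation graph.
import numpy as np

def top_hubs(adj: np.ndarray, frac: float = 0.01) -> np.ndarray:
    """Indices of the top `frac` of neurons by weighted degree."""
    degree = np.abs(adj).sum(axis=1)
    k = max(1, int(frac * adj.shape[0]))
    return np.argsort(degree)[-k:]

def hub_recurrence(adjs: list, frac: float = 0.01) -> np.ndarray:
    """Per-neuron fraction of sample graphs in which it is a top hub."""
    counts = np.zeros(adjs[0].shape[0])
    for adj in adjs:
        counts[top_hubs(adj, frac)] += 1
    return counts / len(adjs)

# Synthetic demo: neuron 0 is strongly connected in every sample graph,
# so it should recur as a hub with frequency 1.0.
rng = np.random.default_rng(1)
adjs = []
for _ in range(10):
    a = rng.normal(size=(100, 100)) * 0.1
    a = (a + a.T) / 2
    a[0, :] = 1.0
    a[:, 0] = 1.0
    adjs.append(a)
rec = hub_recurrence(adjs, frac=0.01)
```

A recurrence profile like `rec`, computed at each depth, is the kind of quantity Figure 5 summarizes layer by layer.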