Visual Attention Drifts, but Anchors Hold: Mitigating Hallucination in Multimodal Large Language Models via Cross-Layer Visual Anchors

Chengxu Yang, Jingling Yuan, Chuang Hu, Jiawei Jiang

Abstract

Multimodal Large Language Models (MLLMs) often suffer from object hallucination. While existing work mitigates this with attention enhancement and visual retracing, these methods offer little interpretability for the attention drift that occurs in the model's final stages. In this paper, we investigate the layer-wise evolution of visual features and find that hallucination stems from deep-layer attention regressing toward the initial visual noise of early layers. We observe that output reliability depends on acquiring visual anchors at intermediate layers rather than final layers. Based on these insights, we propose Cross-Layer Visual Anchors (CLVA), a training-free method that reinforces critical mid-layer features while suppressing regressive noise. By exploiting anchors captured from attention dynamics, CLVA pulls deep-layer attention back to the correct visual regions. Evaluations across diverse architectures and benchmarks demonstrate strong performance with no significant increase in computation time or GPU memory.
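To make the mechanism concrete, below is a minimal sketch of how cross-layer anchoring might be applied to one deep layer's attention logits. This is an illustration under our own assumptions rather than the authors' implementation: the tensor shapes, the `vis_slice` token range, the `anchor_attention` helper, and the additive-bias formulation are hypothetical, while $\alpha$ and $\beta$ correspond to the hyperparameters ablated in Figure 5.

```python
# Hedged sketch of cross-layer visual anchoring (NOT the authors' code).
# Idea: before the softmax of a deep layer, bias the visual-token logits
# toward the mean attention of vision-sensitive mid-layer heads (positive
# anchor) and away from the pattern of insensitive early-layer heads
# (negative anchor). Shapes and the additive form are assumptions.
import torch

def anchor_attention(
    attn_logits: torch.Tensor,  # (heads, query_len, key_len) deep-layer logits
    pos_anchor: torch.Tensor,   # (key_len,) mean attention of mid-layer sensitive heads
    neg_anchor: torch.Tensor,   # (key_len,) mean attention of early-layer insensitive heads
    vis_slice: slice,           # positions of visual tokens among the keys
    alpha: float = 0.5,         # positive-anchoring strength (ablated in Fig. 5)
    beta: float = 0.5,          # noise-suppression strength (ablated in Fig. 5)
) -> torch.Tensor:
    """Pull a deep layer's attention back toward mid-layer visual anchors."""
    logits = attn_logits.clone()
    bias = alpha * pos_anchor[vis_slice] - beta * neg_anchor[vis_slice]
    logits[..., vis_slice] = logits[..., vis_slice] + bias
    return torch.softmax(logits, dim=-1)

if __name__ == "__main__":
    heads, q, k = 32, 1, 640
    vis = slice(35, 611)  # e.g., 576 visual tokens after the prompt (illustrative)
    logits = torch.randn(heads, q, k)
    pos = torch.softmax(torch.randn(k), dim=-1)
    neg = torch.softmax(torch.randn(k), dim=-1)
    attn = anchor_attention(logits, pos, neg, vis, alpha=0.6, beta=0.4)
    assert torch.allclose(attn.sum(-1), torch.ones(heads, q))  # rows stay distributions
```

Applying the bias in logit space keeps each attention row a valid distribution after the softmax, and a training-free hook of this kind adds only a vector addition per layer, which is consistent with the abstract's claim of negligible extra compute and memory.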

Figures (10)

  • Figure 1: Matrix visualization of visual attention intensity across 32 layers and 32 heads in LLaVA. Colors represent the average attention weights to visual tokens $T_{vis}$, while numbers denote the average attention weights to system and text prompt tokens.
  • Figure 2: Attention analysis on LLaVA-1.5-7B. (a) Heatmap visualization of attention patterns across different layers. (b) Statistical metrics: the blue line shows the average attention entropy of vision-sensitive heads, while the green and red lines show the Pearson correlation coefficients between each layer's attention distribution and, respectively, the mean attention of insensitive heads in initial layers and of sensitive heads in intermediate layers (a sketch of these diagnostics appears after this figure list).
  • Figure 3: Architecture of CLVA. Based on our attention analysis, we propose a two-step process to re-anchor drifting attention: positive anchoring reinforces core visual semantic features, while negative anchoring excludes background noise using the insensitive heads.
  • Figure 4: Results of LLaVA-1.5 on MME-Fullset.
  • Figure 5: Ablation study of $\alpha$ and $\beta$.
  • ...and 5 more figures
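
The statistics plotted in Figures 1 and 2 can be recomputed from cached attention maps. The sketch below is a hedged reconstruction from the captions alone: it computes the per-layer share of attention on visual tokens (Figure 1), head-wise attention entropy, and the Pearson correlation of each layer's attention against a reference pattern (Figure 2b). Tensor shapes and the choice of reference heads are our assumptions, not the authors' code.

```python
# Hedged reconstruction of the diagnostics behind Figures 1-2 (assumed shapes).
import torch

def visual_attention_share(attn: torch.Tensor, vis_slice: slice) -> torch.Tensor:
    """attn: (layers, heads, queries, keys) softmax weights.
    Returns (layers, heads): mean attention mass on visual tokens (Fig. 1)."""
    return attn[..., vis_slice].sum(dim=-1).mean(dim=-1)

def attention_entropy(attn: torch.Tensor) -> torch.Tensor:
    """Shannon entropy of each head's attention, averaged over queries
    (the blue curve of Fig. 2b when restricted to vision-sensitive heads)."""
    p = attn.clamp_min(1e-12)
    return -(p * p.log()).sum(dim=-1).mean(dim=-1)

def pearson_to_reference(attn: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
    """Pearson correlation between each layer's mean attention over keys and a
    reference pattern, e.g. the mean attention of mid-layer sensitive heads
    (green/red curves of Fig. 2b). reference: (keys,)."""
    x = attn.mean(dim=(1, 2))             # (layers, keys)
    x = x - x.mean(dim=-1, keepdim=True)  # center per layer
    r = reference - reference.mean()
    return (x * r).sum(dim=-1) / (x.norm(dim=-1) * r.norm() + 1e-12)
```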