Focus Matters: Phase-Aware Suppression for Hallucination in Vision-Language Models

Sohyeon Kim, Sang Yeon Yoon, Kyeongbo Kong

Abstract

Large Vision-Language Models (LVLMs) have achieved impressive progress in multimodal reasoning, yet they remain prone to object hallucinations, generating descriptions of objects that are not present in the input image. Recent approaches attempt to mitigate hallucinations by suppressing unreliable visual signals in the vision encoder, but many rely on iterative optimization for each input, resulting in substantial inference latency. In this work, we investigate the internal attention dynamics of vision encoders in LVLMs and identify a consistent three-phase structure of visual information processing: diffusion, focus, and rediffusion. Our analysis reveals that hallucination behavior is particularly sensitive to tokens receiving low attention during the focus phase. Motivated by this observation, we propose a lightweight inference-time intervention that selectively suppresses such tokens during the focus phase. The method operates in a training-free manner using statistics from a single forward pass and employs a Determinantal Point Process (DPP) to preserve diverse visual cues while filtering redundant tokens. Extensive experiments across multiple LVLM backbones and decoding strategies demonstrate that the proposed approach consistently reduces hallucination metrics while maintaining competitive caption quality. Moreover, compared to adversarial uncertainty estimation methods, our approach achieves comparable hallucination mitigation with negligible additional inference latency.
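
The abstract only outlines the intervention, so the snippet below is a minimal sketch of the idea, assuming per-layer attention maps and visual-token features are available from a single forward pass of the vision encoder. The function names (`focus_phase_keep_mask`, `greedy_dpp_select`), the quality-weighted kernel, and the `keep_ratio` parameter are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def greedy_dpp_select(kernel: torch.Tensor, k: int) -> list:
    """Naive greedy MAP inference for a DPP: repeatedly add the item that most
    increases the log-determinant of the kernel restricted to the selection.
    Written for clarity, not speed; faster greedy variants exist."""
    selected, candidates = [], list(range(kernel.shape[0]))
    eye = torch.eye(kernel.shape[0], device=kernel.device, dtype=kernel.dtype)
    for _ in range(k):
        best_i, best_gain = None, float("-inf")
        for i in candidates:
            idx = selected + [i]
            sub = kernel[idx][:, idx] + 1e-6 * eye[:len(idx), :len(idx)]
            gain = torch.logdet(sub).item()
            if gain > best_gain:
                best_i, best_gain = i, gain
        selected.append(best_i)
        candidates.remove(best_i)
    return selected

def focus_phase_keep_mask(attn: torch.Tensor, feats: torch.Tensor,
                          keep_ratio: float = 0.9) -> torch.Tensor:
    """Build a keep/suppress mask over visual tokens at a focus-phase layer.

    attn : (heads, tokens) attention received by each visual token.
    feats: (tokens, dim) token features at the same layer.
    Returns a boolean mask; False entries are the low-attention, redundant
    tokens to suppress during the focus phase.
    """
    quality = attn.mean(dim=0)                 # low values = hallucination-prone tokens
    sim = F.normalize(feats, dim=-1)
    # Quality-weighted similarity kernel L = diag(q) (F F^T) diag(q): the DPP then
    # favors keeping tokens that are both well-attended and mutually diverse.
    kernel = quality[:, None] * (sim @ sim.T) * quality[None, :]
    keep = greedy_dpp_select(kernel, k=int(keep_ratio * feats.shape[0]))
    mask = torch.zeros(feats.shape[0], dtype=torch.bool)
    mask[keep] = True
    return mask
```

In this reading, the mask would be applied only at the focus-phase layers identified by the attention-dynamics analysis, leaving the diffusion and rediffusion phases untouched.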

Paper Structure

This paper contains 42 sections, 20 equations, 26 figures, 4 tables.

Figures (26)

  • Figure 1: Runtime comparison and hallucination mitigation performance (CHAIR) across recent LVLMs.
  • Figure 2: Layer-wise attention dynamics across various LVLM backbones. (a) The progression of the maximum attention score to entropy ratio ($R^{(l)}$) across vision encoder layers (see the sketch after this list). (b) Visualization of attention maps, demonstrating a consistent three-phase visual processing structure: diffusion, focus, and rediffusion.
  • Figure 3: Impact of masking strategies across different processing phases on hallucination metrics. Masking visual tokens during the focus phase (Mask 2) effectively reduces hallucinations (indicated by lower $\mathrm{CHAIR}_{S}$(CS) and $\mathrm{CHAIR}_{I}$(CI) scores) while preserving object recognition capabilities (F1 score), highlighting the focus phase as the effective intervention point.
  • Figure 4: Qualitative examples of hallucination behavior across masking phases. GT captions are shown for reference, and hallucinated statements are highlighted in red. While no masking or masking in the diffusion (Phase 1) and rediffusion (Phase 3) phases produces inconsistent results, masking in the focus phase (Phase 2) consistently reduces hallucinations and yields captions more consistent with the image content.
  • Figure 5: Visual Attention Ratio (VAR) analysis under different masking conditions. Left: Distribution of image-level mean VAR across masking settings. Masking in the focus phase (Phase 2) yields a significantly higher VAR compared to the baseline (No masking) ($p<0.001$). Right: Layer-head VAR heatmaps showing increased visual attention in intermediate layers when DPP masking is applied.
  • ...and 21 more figures
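
For a concrete handle on the $R^{(l)}$ quantity in Figure 2, the following is a hedged sketch of one way to compute such a layer-wise profile from softmax attention maps; the paper's exact normalization and aggregation may differ, and the interface (`attn_maps` as a per-layer list of attention tensors) is assumed.

```python
import torch

def layerwise_focus_profile(attn_maps):
    """Layer-wise ratio R^(l): maximum attention score over mean attention entropy.
    `attn_maps` is assumed to be a list of (heads, queries, keys) softmax attention
    tensors, one per vision-encoder layer."""
    ratios = []
    for attn in attn_maps:
        p = attn.clamp_min(1e-12)                     # guard against log(0)
        entropy = -(p * p.log()).sum(dim=-1).mean()   # average entropy over heads and queries
        ratios.append((attn.max() / entropy).item())  # concentrated (focused) layers score high
    return ratios  # a rise-then-fall profile traces diffusion -> focus -> rediffusion
```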