
ACT Now: Preempting LVLM Hallucinations via Adaptive Context Integration

Bei Yan, Yuecong Min, Jie Zhang, Shiguang Shan, Xilin Chen

Abstract

Large Vision-Language Models (LVLMs) frequently suffer from severe hallucination issues. Existing mitigation strategies predominantly rely on isolated, single-step states to enhance visual focus or suppress strong linguistic priors. However, these static approaches neglect dynamic context changes across the generation process and struggle to correct inherited information loss. To address this limitation, we propose Adaptive Context inTegration (ACT), a training-free inference intervention method that mitigates hallucination through the adaptive integration of contextual information. Specifically, we first propose visual context exploration, which leverages spatio-temporal profiling to adaptively amplify attention heads responsible for visual exploration. To further facilitate vision-language alignment, we propose semantic context aggregation, which marginalizes potential semantic queries to effectively aggregate visual evidence, thereby resolving the information loss caused by the discrete nature of token prediction. Extensive experiments across diverse LVLMs demonstrate that ACT significantly reduces hallucinations and achieves competitive results on both discriminative and generative benchmarks, serving as a robust and highly adaptable solution without compromising fundamental generation capabilities.
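The abstract describes two mechanisms: adaptively amplifying "visually exploratory" attention heads, and marginalizing over candidate next tokens instead of committing to a single discrete prediction. The sketch below is a minimal, hedged illustration of these two ideas only; it is not the authors' implementation, and all names, tensor shapes, and the specific profiling/weighting rules are assumptions made for illustration.

```python
# Minimal sketch (not the paper's code) of the two ideas named in the abstract.
# Assumptions: a decoding step exposes per-head attention over visual tokens,
# head-level profiling scores, and next-token logits; shapes are illustrative.

import torch
import torch.nn as nn


def amplify_dynamic_heads(attn, head_scores, alpha=1.5):
    """Boost attention of heads profiled as visually exploratory.

    attn:        [num_heads, num_visual_tokens] attention over image tokens
    head_scores: [num_heads] profiling score (e.g., temporal variance of visual
                 attention across decoding steps); higher = more dynamic
    alpha:       guidance scale (hypothetical; the paper's exact rule may differ)
    """
    num_heads = attn.shape[0]
    # Uniform scores give every head a factor of alpha; peaked scores
    # concentrate the amplification on the most dynamic heads.
    weights = 1.0 + (alpha - 1.0) * torch.softmax(head_scores, dim=0) * num_heads
    boosted = attn * weights.unsqueeze(-1)
    return boosted / boosted.sum(dim=-1, keepdim=True)  # re-normalize per head


def marginalize_semantic_queries(next_logits, embed: nn.Embedding, top_k=5):
    """Form a 'soft' query by probability-weighted averaging of the top-k
    candidate token embeddings, rather than discretizing to one token,
    so low-confidence alternatives are not thrown away.

    next_logits: [vocab_size] logits for the next token
    embed:       embedding table mapping token ids to query vectors
    """
    probs = torch.softmax(next_logits, dim=-1)
    top_p, top_ids = probs.topk(top_k)
    top_p = top_p / top_p.sum()                 # renormalize over the top-k set
    return (top_p.unsqueeze(-1) * embed(top_ids)).sum(dim=0)
```

Both helpers are stand-ins for the paper's visual context exploration and semantic context aggregation, respectively, and are meant only to make the abstract's description concrete.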

Paper Structure

This paper contains 16 sections, 8 equations, 5 figures, and 6 tables.

Figures (5)

  • Figure 1: Illustration of temporal evolution in cross-modal attention during generation. (Left) Visual attention typically surges before the model predicts the corresponding token. (Top Right) Attention heads show distinct behavioral patterns during decoding. (Bottom Right) Discretization of low-confidence predictions causes information loss, resulting in poor grounding and potential hallucinations.
  • Figure 2: Overview of the proposed ACT method. (Left) VCE amplifies dynamic heads to capture broader visual evidence. (Right) SCA marginalizes parallel textual hypotheses to preemptively resolve local linguistic uncertainty.
  • Figure 3: Visualizations of ablation study results. (A) Impact of the guidance shift between dynamic and static heads. (B) Robustness to calibration set size and source domain. (C) Positional distribution of hallucinations in CHAIR evaluation.
  • Figure 4: Effect of the guidance scale of ACT on discriminative and generative tasks.
  • Figure 5: Qualitative comparison of visual attention maps between the baseline and our proposed ACT on LLaVA-1.5-7B.