First Logit Boosting: Visual Grounding Method to Mitigate Object Hallucination in Large Vision-Language Models

Jiwoo Ha, Jongwoo Baek, Jinhyun So

Abstract

Recent Large Vision-Language Models (LVLMs) have demonstrated remarkable performance across various multimodal tasks that require understanding both visual and linguistic inputs. However, object hallucination -- the generation of nonexistent objects in answers -- remains a persistent challenge. Although several approaches such as retraining and external grounding methods have been proposed to mitigate this issue, they still suffer from high data costs or structural complexity. Training-free methods such as Contrastive Decoding (CD) are more cost-effective, avoiding additional training or external models, but still suffer from long-term decay, where visual grounding weakens and language priors dominate as generation progresses. In this paper, we propose First Logit Boosting (FLB), a simple yet effective training-free technique designed to alleviate long-term decay in LVLMs. FLB stores the logit of the first generated token and adds it to subsequent token predictions, effectively mitigating long-term decay of visual information. We observe that FLB (1) sustains the visual information embedded in the first token throughout generation, and (2) suppresses hallucinated words through the stabilizing effect of the "The" token. Experimental results show that FLB significantly reduces object hallucination across various tasks, benchmarks, and backbone models. Notably, it incurs negligible inference overhead, making it highly applicable to real-time multimodal systems. Code is available at https://github.com/jiwooha20/FLB
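The core mechanism described in the abstract -- caching the logits of the first generated token and adding them to every subsequent prediction -- can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation: the function name `flb_step` and the scaling factor `alpha` are assumptions (the paper may add the stored logits unscaled), and real use would operate on the model's full vocabulary-sized logit tensor at each decoding step.

```python
def flb_step(logits, first_logits=None, alpha=1.0):
    """Hypothetical sketch of First Logit Boosting (FLB).

    logits: the model's raw logits for the current decoding step
            (one score per vocabulary token).
    first_logits: logits stored from the first decoding step, or None.
    alpha: assumed scaling factor for the boost (not from the paper).
    Returns (adjusted_logits, stored_first_logits).
    """
    if first_logits is None:
        # First decoding step: cache these logits as the visual anchor
        # and emit them unchanged.
        return list(logits), list(logits)
    # Subsequent steps: add the stored first-token logits so that the
    # visual evidence carried by the first token keeps influencing
    # token selection as generation proceeds.
    boosted = [l + alpha * f for l, f in zip(logits, first_logits)]
    return boosted, first_logits
```

In a real decoding loop, this adjustment would be applied to the logits before softmax/sampling at every step, with the cache populated once at step one.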

Paper Structure

This paper contains 35 sections, 8 equations, 13 figures, and 19 tables.

Figures (13)

  • Figure 1: Overview of First Logit Boosting (FLB). FLB stores the logit of the first generated token and reuses it during decoding, which leverages two complementary effects. (1) Direct visual grounding: the first-token logit inherently carries stronger visual evidence (man) than the hallucinated token (women), serving as an anchor that preserves visual cues weakened by positional drift. (2) Implicit visual referencing: by boosting the probability of starting a sentence with "The", FLB increases the likelihood of selecting nouns established before long-term decay occurs, thus maintaining referential coherence and mitigating hallucination.
  • Figure 2: Comparison of probabilities for ground-truth (left) and hallucinated (right) words across token steps for each mitigation method. As sentence length increases, hallucinated-word logits become more dominant: while VCD, ICD, and M3ID fail to suppress this trend, FLB (ours) effectively mitigates hallucinated predictions.
  • Figure 3: Logits of ground-truth words (left) and hallucinated words (middle) for the first token during caption generation for a case image (right). The logits of ground-truth words are generally higher than those of hallucinated words.
  • Figure 4: Top 20 tokens by logit value for the first token prediction. The list includes common sentence-starting words such as "The", "In", and "A".
  • Figure 5: Inference speed comparison across decoding strategies, measured as per-token generation time. VCD/ICD/M3ID run about twice as slow as the baseline, while FLB (ours) maintains near-baseline speed.
  • ...and 8 more figures