VideoHallucer: Evaluating Intrinsic and Extrinsic Hallucinations in Large Video-Language Models
Yuxuan Wang, Yueqian Wang, Dongyan Zhao, Cihang Xie, Zilong Zheng
TL;DR
VideoHallucer tackles the pervasive problem of hallucinations in large video-language models (LVLMs) by introducing a first-of-its-kind benchmark that distinguishes intrinsic and extrinsic hallucinations across five settings. It employs adversarial binary VideoQA, in which paired basic and hallucinated questions probe model grounding and bias, and analyzes eleven LVLMs, revealing that scaling helps with basic visual cues but yields limited gains on extrinsic factual hallucinations. The work further introduces Self-PEP, a plug-in framework that improves hallucination resistance through predict-explain-predict cycles, achieving an average improvement of 5.38% across model architectures. Together, VideoHallucer and Self-PEP offer a comprehensive toolkit for diagnosing and mitigating hallucinations in video-language understanding, with implications for safer, more reliable multimodal AI systems.
Abstract
Recent advancements in Multimodal Large Language Models (MLLMs) have extended their capabilities to video understanding. Yet, these models are often plagued by "hallucinations", generating irrelevant or nonsensical content that deviates from the actual video context. This work introduces VideoHallucer, the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs). VideoHallucer categorizes hallucinations into two main types, intrinsic and extrinsic, with further subcategories for detailed analysis: object-relation, temporal, semantic detail, extrinsic factual, and extrinsic non-factual hallucinations. We adopt an adversarial binary VideoQA method for comprehensive evaluation, in which pairs of basic and hallucinated questions are strategically crafted. Evaluating eleven LVLMs on VideoHallucer reveals that i) the majority of current models exhibit significant issues with hallucinations; ii) while scaling datasets and parameters improves models' ability to detect basic visual cues and counterfactuals, it provides limited benefit for detecting extrinsic factual hallucinations; iii) existing models are more adept at detecting facts than identifying hallucinations. As a byproduct, these analyses further inform the development of our Self-PEP framework, which achieves an average improvement of 5.38% in hallucination resistance across all model architectures.
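To make the adversarial binary VideoQA protocol concrete, the sketch below shows one plausible way to score paired questions: a model is credited on a pair only when it answers both the basic question and its hallucinated counterpart correctly. This is a minimal illustration under assumed yes/no ground truths and an illustrative `model(video_id, question)` interface; it is not the released VideoHallucer evaluation code.

```python
# Minimal sketch of paired (adversarial) binary VideoQA scoring.
# Assumption: each item pairs a "basic" question grounded in the video (answer "yes")
# with a "hallucinated" question about absent content (answer "no").
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class QAPair:
    video_id: str
    basic_question: str         # grounded in actual video content
    hallucinated_question: str  # probes content not present in the video
    basic_answer: str = "yes"
    hallucinated_answer: str = "no"

def evaluate(pairs: List[QAPair], model: Callable[[str, str], str]) -> dict:
    """Score a model that maps (video_id, question) -> 'yes'/'no'."""
    basic_correct = halluc_correct = both_correct = 0
    for p in pairs:
        b_ok = model(p.video_id, p.basic_question).strip().lower() == p.basic_answer
        h_ok = model(p.video_id, p.hallucinated_question).strip().lower() == p.hallucinated_answer
        basic_correct += b_ok
        halluc_correct += h_ok
        both_correct += b_ok and h_ok  # credit only when both answers are right
    n = len(pairs)
    return {
        "basic_acc": basic_correct / n,
        "hallucination_acc": halluc_correct / n,
        "overall_acc": both_correct / n,  # pair-level, bias-resistant score
    }
```

Scoring at the pair level penalizes models that simply answer "yes" (or "no") regardless of the video, which is why the benchmark can separate genuine grounding from answer bias.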

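The Self-PEP framework is described as a predict-explain-predict cycle. The following is a hypothetical sketch of such a loop, assuming a generic chat-style LVLM callable; the prompt wording and function names are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical predict-explain-predict cycle in the spirit of Self-PEP.
# `lvlm(video_id, prompt) -> str` is an assumed interface, not the paper's code.
from typing import Callable

def self_pep_answer(lvlm: Callable[[str, str], str], video_id: str, question: str) -> str:
    # 1) Predict: get an initial yes/no answer.
    first = lvlm(video_id, f"Answer yes or no: {question}")
    # 2) Explain: ask the model to justify the answer with evidence from the video.
    explanation = lvlm(
        video_id,
        f"Question: {question}\nInitial answer: {first}\n"
        "Explain, citing visible evidence from the video, whether this answer is supported.",
    )
    # 3) Predict again: re-answer conditioned on the self-generated explanation.
    final = lvlm(
        video_id,
        f"Question: {question}\nExplanation: {explanation}\n"
        "Given this explanation, answer yes or no.",
    )
    return final
```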