LiveFact: A Dynamic, Time-Aware Benchmark for LLM-Driven Fake News Detection

Cheng Xu, Changhong Jin, Yingjie Niu, Nan Yan, Yuke Mei, Shuhao Guan, Liming Chen, M-Tahar Kechadi

Abstract

The rapid development of Large Language Models (LLMs) has transformed fake news detection and fact-checking from simple classification into complex reasoning tasks. Evaluation frameworks, however, have not kept pace: current benchmarks are static, leaving them vulnerable to benchmark data contamination (BDC) and ill-suited to assessing reasoning under temporal uncertainty. To address this, we introduce LiveFact, a continuously updated benchmark that simulates the real-world "fog of war" in misinformation detection. LiveFact uses dynamic, temporal evidence sets to evaluate models on their ability to reason over evolving, incomplete information rather than on memorized knowledge. We propose a dual-mode evaluation, Classification Mode for final verification and Inference Mode for evidence-based reasoning, together with a component that explicitly monitors BDC. Experiments with 22 LLMs show that open-source Mixture-of-Experts models, such as Qwen3-235B-A22B, now match or outperform proprietary state-of-the-art systems. More importantly, our analysis reveals a significant "reasoning gap": capable models exhibit epistemic humility by recognizing unverifiable claims in early data slices, an aspect that traditional static benchmarks overlook. LiveFact sets a sustainable standard for evaluating robust, temporally aware AI verification.
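
To make the dual-mode protocol concrete, the following is a minimal sketch of how a single claim could be scored in both modes over time-sliced evidence sets. It is an illustration rather than the released evaluation code: the `Slice` structure, the three-way label set, and the grading rules (Classification Mode judged against the final verdict, Inference Mode against the slice-local gold label, where early slices may be "Ambiguous") are assumptions inferred from the abstract and the Figure 3 caption.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical three-way label set: "Ambiguous" marks claims that the
# evidence visible at a given time slice can neither verify nor refute.
LABELS = {"True", "False", "Ambiguous"}

@dataclass
class Slice:
    delta: int            # temporal offset in days, e.g., -3, 0, +3
    evidence: List[str]   # evidence snippets available at this offset
    gold: str             # gold label given ONLY this slice's evidence

# A model is abstracted as (claim, evidence) -> label in LABELS.
Model = Callable[[str, List[str]], str]

def classification_scores(model: Model, claim: str,
                          slices: List[Slice]) -> Dict[int, float]:
    """Classification Mode: grade every slice against the FINAL verdict,
    i.e., the gold label once all evidence has arrived (the last slice)."""
    final_verdict = slices[-1].gold
    return {s.delta: float(model(claim, s.evidence) == final_verdict)
            for s in slices}

def inference_scores(model: Model, claim: str,
                     slices: List[Slice]) -> Dict[int, float]:
    """Inference Mode: grade each slice against its slice-local gold label,
    so answering "Ambiguous" at delta = -3 can be the correct call."""
    return {s.delta: float(model(claim, s.evidence) == s.gold)
            for s in slices}
```

Under this reading, the score drop at $\delta=-3$ in Classification Mode is structural: the final verdict is not derivable from pre-event evidence, and Inference Mode credits models that say so.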

Figures (15)

  • Figure 1: Cost-performance trade-off on LiveFact (Nov. 2025). Qwen3-235B-A22B-Instruct-2507 achieves the best performance (72.4%), while Qwen3-30B-A3B-Instruct-2507 provides optimal cost-efficiency at 14× lower cost than comparable GPT models.
  • Figure 2: The overall framework of the LiveFact Benchmark. (A) The Monthly Development Pipeline illustrates the continuous process of acquiring real-time events, generating claims and context via LLMs, and performing human verification. (B) The Problem Formalism specifies the task as conditional reasoning under temporal constraints, utilizing time-sliced evidence sets (e.g., Pre-Event $E^{(-3)}$ vs. Post-Event $E^{(+3)}$) to simulate the "fog of war." (C) The Evaluation framework details the Dual-Mode approach (separating Prediction vs. Inference capabilities) and the integration of the SSA Framework to quantify BDC risk via an entity-shift mechanism.
  • Figure 3: Temporal performance evolution for select model families across $\delta \in \{-3, 0, +3\}$. Panel (a) shows Classification Mode, where scores drop at $\delta=-3$ due to the lack of definitive evidence. Panel (b) shows Inference Mode, where robust models recover accuracy by correctly predicting "Ambiguous," narrowing the performance gap.
  • Figure 4: Analysis of the "Reasoning Gap" (Inference Score $-$ Classification Score at $\delta=-3$). Models with large positive gaps (green bars) effectively detect information voids ("Uncertainty Aware"). Models with negative or near-zero gaps (red bars) exhibit two failure modes: instruction-tuned models are genuinely "Overconfident," while base models ($\dagger$) fail due to format non-compliance rather than reasoning deficits. (A minimal code rendering of this gap definition follows the figure list.)
  • Figure 5: Prompt for Context Generation
  • ...and 10 more figures
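
As the Figure 4 caption states, the "Reasoning Gap" reduces to a single subtraction per model. Below is a one-function rendering of that definition, reusing the `Dict` import and the hypothetical scoring functions from the sketch above; it transcribes the caption and is not the paper's analysis code.

```python
def reasoning_gap(inference: Dict[int, float],
                  classification: Dict[int, float],
                  delta: int = -3) -> float:
    """Reasoning Gap at the pre-event slice (Figure 4): Inference score
    minus Classification score at delta = -3. A large positive gap means
    the model detects the information void ("uncertainty aware"); a
    negative or near-zero gap signals overconfidence or, for base models,
    format non-compliance rather than a reasoning deficit."""
    return inference[delta] - classification[delta]
```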