M2-Verify: A Large-Scale Multidomain Benchmark for Checking Multimodal Claim Consistency

Abolfazl Ansari, Delvin Ce Zhang, Zhuoyang Zou, Wenpeng Yin, Dongwon Lee

Abstract

Evaluating scientific arguments requires assessing the strict consistency between a claim and its underlying multimodal evidence. However, existing benchmarks lack the scale, domain diversity, and visual complexity needed to evaluate this alignment realistically. To address this gap, we introduce M2-Verify, a large-scale multimodal dataset for checking scientific claim consistency. Sourced from PubMed and arXiv, M2-Verify provides over 469K instances across 16 domains, rigorously validated through expert audits. Extensive baseline experiments show that state-of-the-art models struggle to maintain robust consistency. While top models achieve up to 85.8% Micro-F1 on low-complexity medical perturbations, performance drops to 61.6% on high-complexity challenges like anatomical shifts. Furthermore, expert evaluations expose hallucinations when models generate scientific explanations for their alignment decisions. Finally, we demonstrate our dataset's utility and provide comprehensive usage guidelines.

Paper Structure

This paper contains 32 sections, 3 figures, 9 tables.

Figures (3)

  • Figure 1: Representative examples from M2-Verify-Med and M2-Verify-Gen illustrating the benchmark's multimodal diversity. Verifying gastric balloon location (left) and model architecture (right) requires joint reasoning across figures and captions.
  • Figure 2: Overview of the M2-Verify framework. The pipeline integrates automated claim extraction, visual-dependency filtering, domain-specific perturbations, and grounded explanations validated by a multi-phase expert audit.
  • Figure 3: Radar plots comparing zero-shot baselines (blue) and SFT claim verification performance (red) across diverse scientific domains on the M2-Verify benchmark.