JAMMEval: A Refined Collection of Japanese Benchmarks for Reliable VLM Evaluation

Issa Sugiura, Koki Maeda, Shuhei Kurita, Yusuke Oda, Daisuke Kawahara, Naoaki Okazaki

Abstract

Reliable evaluation is essential for the development of vision-language models (VLMs). However, Japanese VQA benchmarks have undergone far less iterative refinement than their English counterparts. As a result, many existing benchmarks contain issues such as ambiguous questions, incorrect answers, and instances that can be solved without visual grounding, undermining evaluation reliability and leading to misleading conclusions in model comparisons. To address these limitations, we introduce JAMMEval, a refined collection of Japanese benchmarks for reliable VLM evaluation. It is constructed by systematically refining seven existing Japanese benchmark datasets through two rounds of human annotation, improving both data quality and evaluation reliability. In our experiments, we evaluate open-weight and proprietary VLMs on JAMMEval and analyze the capabilities of recent models on Japanese VQA. We further demonstrate the effectiveness of our refinement by showing that the resulting benchmarks yield evaluation scores that better reflect model capability, exhibit lower run-to-run variance, and improve the ability to distinguish between models of different capability levels. We release our dataset and code to advance reliable evaluation of VLMs.
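To make the reliability criteria in the abstract concrete, the following is a minimal sketch, not the paper's actual evaluation code, of how run-to-run variance and the ability to distinguish two models might be measured from repeated evaluation runs. All scores, model names, and the separability heuristic below are illustrative assumptions.

```python
import statistics

# Hypothetical per-run accuracy scores for two models on one benchmark,
# each evaluated over several runs (e.g., different sampling seeds).
# These numbers are illustrative, not results from the paper.
runs = {
    "model_a": [0.612, 0.605, 0.618, 0.609, 0.615],
    "model_b": [0.574, 0.581, 0.569, 0.578, 0.572],
}

def summarize(scores):
    """Return the mean score and the run-to-run standard deviation."""
    return statistics.mean(scores), statistics.stdev(scores)

mean_a, std_a = summarize(runs["model_a"])
mean_b, std_b = summarize(runs["model_b"])

# A simple separability check: treat the two models as distinguishable
# if the gap between their mean scores exceeds the sum of their
# run-to-run standard deviations. This is a heuristic for illustration,
# not the criterion used in the paper.
gap = abs(mean_a - mean_b)
distinguishable = gap > (std_a + std_b)

print(f"model_a: {mean_a:.3f} +/- {std_a:.3f}")
print(f"model_b: {mean_b:.3f} +/- {std_b:.3f}")
print(f"gap = {gap:.3f}, distinguishable = {distinguishable}")
```

Under this reading, a refined benchmark should both shrink the per-model standard deviations and widen the gap relative to them, which is what the paper's lower run-to-run variance and improved model discrimination claims correspond to.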

Paper Structure

This paper contains 23 sections, 11 figures, and 2 tables.

Figures (11)

  • Figure 1: Examples of inappropriate instances in existing Japanese VQA evaluation datasets. (a) Open-ended questions with inherent ambiguity that do not admit a unique correct answer. (b) Questions that can be answered with high confidence using only the text, without referring to the image. (c) Instances with incorrect ground-truth answers. (d) Subjective questions for which answers vary across annotators.
  • Figure 2: Construction pipeline of JAMMEval. Starting from seven seed datasets, all instances undergo two rounds of manual review and re-annotation to produce a refined benchmark collection.
  • Figure 3: An example of re-annotation. An ambiguous open-ended question is replaced with a specific, objectively answerable question targeting information visible in the image.
  • Figure 4: Breakdown of refinement operations per dataset. Identical instances required no modification; other categories indicate the type of correction applied.
  • Figure 5: Model performance on JAMMEval across seven tasks. Note that Gemini 3 Pro is evaluated with reasoning enabled, while all other models are evaluated without reasoning. Gemini 3 Pro achieves the highest scores overall.
  • ...and 6 more figures