Justified or Just Convincing? Error Verifiability as a Dimension of LLM Quality

Xiaoyuan Zhu, Kimberly Le Truong, Riccardo Fogliato, Gokul Swamy, Weijian Zhang, Minglai Yang, Longtian Ye, Bangya Liu, Minghao Liu, Andrew Ilyas, Steven Wu

Abstract

As LLMs are deployed in high-stakes settings, users must judge the correctness of individual responses, often relying on model-generated justifications such as reasoning chains or explanations. Yet no standard measure exists for whether these justifications help users distinguish correct answers from incorrect ones. We formalize this idea as error verifiability and propose $v_{\text{bal}}$, a balanced metric that measures whether justifications enable raters to accurately assess answer correctness; we validate the metric against human raters, who show high agreement. We find that neither common approaches, such as post-training and model scaling, nor more targeted interventions reliably improve verifiability. We introduce two methods that do improve verifiability: reflect-and-rephrase (RR) for mathematical reasoning and oracle-rephrase (OR) for factual QA, both of which incorporate domain-appropriate external information. Together, our results establish error verifiability as a distinct dimension of response quality: it does not emerge from accuracy improvements alone and must be addressed with dedicated, domain-aware methods.
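
The abstract does not spell out the definition of $v_{\text{bal}}$; a natural reading of "balanced" is the balanced accuracy of rater verdicts against ground-truth answer correctness. As a hedged sketch (the paper's exact formulation may differ):

$$
v_{\text{bal}} \;=\; \tfrac{1}{2}\Big(\Pr[\hat{y}=1 \mid y=1] \;+\; \Pr[\hat{y}=0 \mid y=0]\Big),
$$

where $y \in \{0,1\}$ indicates whether the model's answer is correct and $\hat{y}$ is a rater's verdict after reading the justification. Under this reading, $v_{\text{bal}}=1$ means raters perfectly separate correct from incorrect answers, while $v_{\text{bal}}=0.5$ is chance level.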

Figures (3)

  • Figure 1: $v_{\text{bal}}$ and accuracy across post-training checkpoints for Tulu3.1-8B and OLMo2-7B.
  • Figure 2: Accuracy vs. $v_{\text{bal}}$ across models and datasets.
  • Figure 3: Effect of calibrating linguistic confidence on $v_{\text{bal}}$ across three models and benchmarks (MATH500, MMLU, MMLU-Pro). Each curve sweeps the fraction $k$ of least-confident responses that were rephrased to express uncertainty; a $\times$ marker on each curve indicates the $k$ maximizing $v_{\text{bal}}$ for that confidence method (a minimal sketch of this sweep appears below). Optima predominantly cluster around $k{=}100\%$, suggesting that uniform hedging is as effective as targeted uncertainty calibration.
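
To illustrate the sweep described in the Figure 3 caption, the sketch below rephrases the $k$ least-confident fraction of responses and re-scores $v_{\text{bal}}$, computed as balanced accuracy per the assumption above. The names `rephrase_to_hedge` and `rater_verdicts` are hypothetical stand-ins for the paper's rephrasing and rating steps, not its actual API.

```python
import numpy as np

def v_bal(is_correct, verdicts):
    """Balanced verifiability (assumed definition: balanced accuracy).

    is_correct : bool array, ground-truth correctness of each answer.
    verdicts   : bool array, rater judgments after reading justifications.
    """
    is_correct = np.asarray(is_correct, dtype=bool)
    verdicts = np.asarray(verdicts, dtype=bool)
    tpr = verdicts[is_correct].mean()        # correct answers accepted
    tnr = (~verdicts)[~is_correct].mean()    # incorrect answers rejected
    return 0.5 * (tpr + tnr)

def sweep_hedging(responses, confidences, is_correct, rephrase_to_hedge,
                  rater_verdicts, ks=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """For each fraction k, rephrase the k least-confident responses to
    express uncertainty, then re-measure v_bal on the edited set.
    `rephrase_to_hedge` and `rater_verdicts` are hypothetical callables
    standing in for the paper's rephrasing and rating pipeline."""
    order = np.argsort(confidences)          # least confident first
    scores = {}
    for k in ks:
        n = int(round(k * len(responses)))
        edited = list(responses)
        for i in order[:n]:
            edited[i] = rephrase_to_hedge(edited[i])
        scores[k] = v_bal(is_correct, rater_verdicts(edited))
    return scores
```

In this framing, the finding that optima cluster at $k{=}100\%$ corresponds to `max(scores, key=scores.get)` returning `1.0`: hedging every response does at least as well as targeting only the least-confident ones.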