
Online Reasoning Calibration: Test-Time Training Enables Generalizable Conformal LLM Reasoning

Cai Zhou, Zekai Wang, Menghua Wu, Qianyu Julie Zhu, Flora C. Shi, Chenyu Wang, Ashia Wilson, Tommi Jaakkola, Stephen Bates

Abstract

While test-time scaling has enabled large language models to solve highly difficult tasks, state-of-the-art results come at exorbitant compute costs. These inefficiencies can be attributed to the miscalibration of post-trained language models and the lack of calibration in popular sampling techniques. Here, we present Online Reasoning Calibration (ORCA), a framework for calibrating the sampling process that draws upon conformal prediction and test-time training. Specifically, we introduce a meta-learning procedure that updates the calibration module for each input. This allows us to provide valid confidence estimates under distributional shift, e.g., in thought patterns that occur across different stages of reasoning, or in prompt distributions between model development and deployment. ORCA not only provides theoretical guarantees on conformal risks, but also empirically shows higher efficiency and generalization across different reasoning tasks. At risk level $\delta=0.1$, ORCA improves Qwen2.5-32B efficiency on in-distribution tasks, with savings of up to 47.5% with supervised labels and 40.7% with self-consistency labels. Under zero-shot out-of-domain settings, it improves MATH-500 savings from the static calibration baseline's 24.8% to 67.0% while maintaining a low empirical error rate, and the same trend holds across model families and downstream benchmarks. Our code is publicly available at https://github.com/wzekai99/ORCA.

Paper Structure

This paper contains 33 sections, 2 theorems, 22 equations, 5 figures, 10 tables, 2 algorithms.

Key Result

Lemma A.1

Fix any threshold $\lambda\in\Lambda$. If $\{(X_i,Y_i)\}_{i=1}^{n+1}$ are exchangeable and $\{U_i\}_{i=1}^{n+1}$ are i.i.d. and independent of $\{(X_i,Y_i)\}$, then the sequence $\{R_i(\lambda)\}_{i=1}^{n+1}$ with $R_i(\lambda):=R(\lambda;X_i,Y_i,U_i)$ is exchangeable (indeed i.i.d. under the i.i.d. assumption).
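Exchangeability of the risk sequence is exactly what makes conformal calibration valid: the empirical quantile of the first $n$ risks controls the $(n+1)$-th. A minimal simulation of that consequence, assuming i.i.d. bounded risks (a stand-in, not the paper's actual risk function):

```python
import numpy as np

# Illustrative only: with i.i.d. risks R_1..R_{n+1} (here Uniform[0,1]),
# the conformal quantile of the first n covers the (n+1)-th risk with
# probability close to 1 - alpha, the core fact behind Lemma A.1.
rng = np.random.default_rng(0)
n, trials, alpha = 200, 5000, 0.1
covered = 0
for _ in range(trials):
    risks = rng.uniform(0.0, 1.0, size=n + 1)
    calib, test = risks[:n], risks[n]
    # conformal rank: the ceil((n+1)(1-alpha))-th smallest calibration risk
    k = int(np.ceil((n + 1) * (1 - alpha)))
    threshold = np.sort(calib)[k - 1]
    covered += test <= threshold
print(round(covered / trials, 3))  # should be close to 0.9
```

The empirical coverage concentrates around $\lceil(n+1)(1-\alpha)\rceil/(n+1)\approx 0.9$, matching the exchangeability argument.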

Figures (5)

  • Figure 1: Framework of Online Reasoning Calibration (ORCA).
  • Figure 2: Compute savings vs. risk tolerance $\delta$ for supervised (left) and consistent (right) labels (Qwen2.5-32B). TTT no-QK consistently outperforms the baseline across all risk levels, with the largest gap at low $\delta$.
  • Figure 3: Actual error rate vs. target risk $\delta$ (supervised, Qwen2.5-32B). All methods track the diagonal, confirming valid risk control. Points below the diagonal satisfy the LTT guarantee.
  • Figure 4: Distribution of per-problem savings at $\delta{=}0.1$ (supervised, Qwen2.5-32B, 902 problems). Solid lines: mean; dashed lines: median. TTT no-QK shifts the distribution toward higher savings across the full range.
  • Figure 5: Probe score trajectories for a test problem (Qwen2.5-32B, $\delta{=}0.1$). The green line marks the first correct step. The static probe (top) never crosses its threshold and saves 0%. The TTT no-QK probe (bottom) crosses the threshold at step 22 and saves 41%.
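The stopping behavior illustrated in Figure 5 can be sketched as a simple rule: halt reasoning at the first step whose probe score crosses the calibrated threshold, and count the skipped steps as savings. The function below is a hypothetical illustration (names and toy scores are ours, not the paper's code):

```python
# Hypothetical sketch of the Figure 5 stopping rule: stop at the first
# step whose probe score crosses the calibrated threshold; "savings" is
# the fraction of remaining reasoning steps that are skipped.
def early_stop(probe_scores, threshold):
    """Return (stop_step, savings) for one problem's score trajectory."""
    total = len(probe_scores)
    for step, score in enumerate(probe_scores, start=1):
        if score >= threshold:
            return step, 1.0 - step / total
    return total, 0.0  # threshold never crossed: no savings

# Toy trajectory in which scores rise as reasoning converges
scores = [0.2, 0.3, 0.5, 0.8, 0.9, 0.95]
stop, saved = early_stop(scores, 0.75)
print(stop, round(saved, 2))  # stops at step 4, saving 1 - 4/6
```

Under this rule, a static probe that never crosses its threshold saves 0%, which is the failure mode the TTT-updated probe in Figure 5 avoids.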

Theorems & Definitions (6)

  • Lemma A.1: Intra-instance adaptation preserves inter-instance exchangeability
  • Proof
  • Theorem A.2: Finite-sample risk control of ORCA via LTT fixed-sequence testing
  • Proof
  • Remark A.3: Marginal guarantee
  • Remark A.4: General bounded risks
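Theorem A.2 certifies thresholds via Learn-then-Test (LTT) fixed-sequence testing. As a hedged sketch of that tool (illustrative Hoeffding p-values, not the paper's implementation): thresholds are tested in a pre-specified order against $H_\lambda$: "risk$(\lambda) > \delta$", rejections continue until the first test fails, and every threshold rejected before that point controls risk at level $\delta$ with probability at least $1-\alpha$.

```python
import math

# Sketch of LTT fixed-sequence testing with Hoeffding p-values, for
# risks bounded in [0, 1]. empirical_risks holds the mean risk on n
# calibration points per threshold, in the fixed testing order
# (e.g. most to least conservative).
def ltt_fixed_sequence(empirical_risks, n, delta, alpha):
    valid = []
    for idx, r_hat in enumerate(empirical_risks):
        # Hoeffding bound: P(mean <= r_hat) <= exp(-2n(delta - r_hat)^2)
        # under H_lambda; p-value is 1 when r_hat >= delta.
        p = math.exp(-2 * n * (delta - r_hat) ** 2) if r_hat < delta else 1.0
        if p > alpha:
            break  # fixed-sequence testing stops at the first failure
        valid.append(idx)
    return valid  # indices of certified thresholds

print(ltt_fixed_sequence([0.01, 0.03, 0.08, 0.2], n=500, delta=0.1, alpha=0.05))
```

In this toy run the first two thresholds are certified; the third has empirical risk too close to $\delta$ for its p-value to clear $\alpha$, so testing stops there.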