Early Stopping for Large Reasoning Models via Confidence Dynamics

Parsa Hosseini, Sumit Nawathe, Mahdi Salmani, Meisam Razaviyayn, Soheil Feizi

Abstract

Large reasoning models rely on long chain-of-thought generation to solve complex problems, but extended reasoning often incurs substantial computational cost and can even degrade performance due to overthinking. A key challenge is determining when the model should stop reasoning and produce the final answer. In this work, we study the confidence of intermediate answers during reasoning and observe two characteristic behaviors: correct reasoning trajectories often reach high-confidence answers early, while incorrect rollouts tend to produce long, unproductive reasoning traces and exhibit less reliable confidence dynamics. Motivated by these observations, we propose CoDE-Stop (Confidence Dynamics Early Stop), an early stopping method that leverages the dynamics of intermediate answer confidence to decide when to terminate reasoning; it requires no additional training and integrates easily into existing models. We evaluate CoDE-Stop on diverse reasoning and science benchmarks across multiple models. It achieves a more favorable accuracy-compute tradeoff than prior early stopping methods and reduces total token usage by 25-50% relative to standard full-length reasoning. In addition, we analyze confidence dynamics during reasoning, offering insight into how confidence evolves in both correct and incorrect trajectories.
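The abstract does not spell out the stopping rule, but the core idea (probe an intermediate answer as reasoning unfolds, and stop once its confidence looks high and stable) can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's algorithm: `step_fn`, `probe_fn`, the geometric-mean confidence, and the threshold/patience values are all assumptions introduced for the example.

```python
import math
from typing import Callable

# Hypothetical interfaces (not from the paper): a step generator that extends
# the chain of thought by one chunk, and an answer probe that forces an
# intermediate answer and returns the log-probabilities of its tokens.
StepFn = Callable[[str], str]           # reasoning so far -> next reasoning chunk
ProbeFn = Callable[[str], list[float]]  # reasoning so far -> answer token log-probs

def answer_confidence(logprobs: list[float]) -> float:
    """Geometric-mean probability of the forced intermediate answer.

    One common confidence proxy; the paper may use a different statistic.
    """
    return math.exp(sum(logprobs) / max(len(logprobs), 1))

def early_stop_reasoning(
    prompt: str,
    step_fn: StepFn,
    probe_fn: ProbeFn,
    conf_threshold: float = 0.9,  # illustrative value, not from the paper
    patience: int = 2,            # require this many consecutive confident probes
    max_steps: int = 64,
) -> str:
    """Generate reasoning chunk by chunk, probing intermediate-answer
    confidence after each chunk, and stop once confidence stays high."""
    trace = prompt
    confident_streak = 0
    for _ in range(max_steps):
        trace += step_fn(trace)
        conf = answer_confidence(probe_fn(trace))
        confident_streak = confident_streak + 1 if conf >= conf_threshold else 0
        if confident_streak >= patience:
            break  # confidence has settled at a high level: stop reasoning
    return trace
```

The patience counter stands in for "dynamics" in the loosest sense: a single high-confidence probe can be noise, while several in a row suggest the trajectory has settled.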

Paper Structure

This paper contains 19 sections, 4 equations, 11 figures, 7 tables.

Figures (11)

  • Figure 1: Accuracy vs. compute cost on Qwen3-4B averaged over 4 reasoning and science benchmarks. Left: reasoning length only. Right: total token compute including intermediate answer-generation overhead. CoDE-Stop achieves the strongest accuracy–compute tradeoff among early stopping methods.
  • Figure 2: Confidence dynamics across reasoning trajectories. Correct trajectories reach high confidence early, while incorrect trajectories exhibit unstable and fluctuating confidence.
  • Figure 3: Incorrect trajectories are longer and exhibit a heavy-tailed distribution.
  • Figure 4: Confidence and degeneration dynamics over reasoning steps. Left: Average confidence for correct and incorrect rollouts; confidence increases even for incorrect cases, while early steps provide better separation. Right: Average degeneration score $D_K$; incorrect rollouts consistently exhibit higher values, with the gap increasing over time (one illustrative way to compute such a score is sketched after this list).
  • Figure 5: Performance of CoDE-Stop against different baselines across multiple benchmarks. CoDE-Stop consistently reduces inference cost while maintaining comparable performance to the baselines, making it Pareto optimal.
  • ...and 6 more figures
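The degeneration score $D_K$ referenced in Figure 4 is not defined in this excerpt. As a purely illustrative stand-in (an assumption, not the paper's definition), the sketch below scores a reasoning trace by its repeated-k-gram rate, a standard proxy for degenerate, loop-like text.

```python
def degeneration_score(tokens: list[str], k: int = 4) -> float:
    """Fraction of repeated k-grams in a token sequence.

    Illustrative proxy only: the paper's D_K may be defined differently.
    A trace that keeps looping over the same phrases has few unique
    k-grams relative to its length, so its score approaches 1.
    """
    if len(tokens) < k:
        return 0.0
    kgrams = [tuple(tokens[i:i + k]) for i in range(len(tokens) - k + 1)]
    return 1.0 - len(set(kgrams)) / len(kgrams)

# Example: a looping trace scores high, a varied trace scores 0.
loopy = "let me check again let me check again let me check again".split()
fresh = "compute the derivative then substitute and simplify the result".split()
print(degeneration_score(loopy))  # ~0.56
print(degeneration_score(fresh))  # 0.0
```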