
Think Twice Before You Write -- an Entropy-based Decoding Strategy to Enhance LLM Reasoning

Jiashu He, Meizhu Liu, Olaitan P Olaleye, Amit Agarwal, M. Avendi, Yassi Abbasi, Matthew Rowe, Hitesh Laxmichand Patel, Paul Li, Tao Sheng, Sujith Ravi, Dan Roth

Abstract

Decoding strategies play a central role in shaping the reasoning ability of large language models (LLMs). Traditional methods such as greedy decoding and beam search often suffer from error propagation, while sampling-based approaches introduce randomness without adequate robustness. Self-consistency improves reliability by aggregating multiple rollouts, but incurs significant computational overhead. We propose an entropy-guided decoding framework that introduces token-level adaptivity into generation. At each step, the model computes the entropy of the token distribution, identifies high-uncertainty positions, and selectively branches on these vulnerable points. A dynamic pool of partial rollouts is maintained and expanded until solutions are completed, concentrating computation where uncertainty is greatest and avoiding unnecessary exploration in confident regions. To enable efficient termination, we apply a rollout-level Entropy After </Think> (EAT) stopping criterion by performing entropy evaluation after the full reasoning trace, rather than incrementally at every step. Experiments on GSM8K, AMC2023, and their perturbed variants demonstrate that our method achieves consistently strong accuracy. Notably, on smaller LLMs, performance is comparable to GPT-5 while operating at a fraction of the cost.
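The per-step entropy check described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the entropy threshold, function names, and toy distributions are assumptions chosen for clarity.

```python
import math

def token_entropy(probs):
    """Shannon entropy (in nats) of a next-token probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def find_vulnerable_positions(step_distributions, threshold=1.0):
    """Flag decoding steps whose entropy exceeds a threshold as branch points.

    The threshold value here is a hypothetical choice for illustration;
    the paper maintains a dynamic pool of partial rollouts at such points.
    """
    return [i for i, probs in enumerate(step_distributions)
            if token_entropy(probs) > threshold]

# Toy example: three decoding steps; step 1 is near-uniform (high entropy),
# so it would be selected for branching, while the confident steps are not.
steps = [
    [0.97, 0.01, 0.01, 0.01],   # confident -> decode greedily
    [0.30, 0.28, 0.22, 0.20],   # uncertain -> branch ("vulnerable" token)
    [0.90, 0.05, 0.03, 0.02],   # confident -> decode greedily
]
print(find_vulnerable_positions(steps))  # [1]
```

In practice the distributions would come from the model's softmaxed logits at each step, and branching would fork the rollout on the top candidate tokens at the flagged positions only, concentrating compute where uncertainty is greatest.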


Paper Structure

This paper contains 27 sections, 1 equation, 16 figures, 8 tables.

Figures (16)

  • Figure 1: Performance on GSM8K: our Entropy Decoding approach enables the 8B Llama model to outperform the base GPT models with $\sim$33x less compute.
  • Figure 2: Many reasoning errors that LLMs make on complex tasks stem from one or a few wrong tokens that mislead the problem-solving direction.
  • Figure 3: An example of parallel thinking on vulnerable tokens. Here, "equilateral" is the vulnerable token.
  • Figure 4: Test accuracy on the perturbed GSM8K dataset under different generation budgets. Each x-axis unit represents 128 tokens. Results are shown for the Llama3.1-8B model using entropy-based and random approaches.
  • Figure 5: Time-consumption (s) distribution of the GPT-5 model on the perturbed GSM8K dataset.
  • ...and 11 more figures