ERPO: Token-Level Entropy-Regulated Policy Optimization for Large Reasoning Models

Song Yu, Li Li

Abstract

Reinforcement learning from verifiable rewards (RLVR) has significantly advanced the reasoning capabilities of large language models. However, standard Group Relative Policy Optimization (GRPO) typically assigns a uniform, sequence-level advantage to all tokens, overlooking the intrinsic information heterogeneity along reasoning chains. We show that this coarse-grained credit assignment leads to premature entropy collapse and encourages the model to generate redundant, low-quality reasoning paths. Through systematic empirical analysis, we identify Critical Decision Pivots (CDPs): transient high-entropy states where the policy's trajectory is most sensitive to perturbations. These pivots represent the "forks in the road" where effective multi-path exploration is most crucial yet often suppressed by uniform advantage signals. Building on these insights, we propose Entropy-Regulated Policy Optimization (ERPO), which shifts the optimization focus from coarse sequences to fine-grained token dynamics. ERPO introduces three synergistic components: (i) Entropy-aware Gating, which adaptively amplifies exploration at CDPs to facilitate diverse path discovery; (ii) Bucket-based Implicit Normalization, which mitigates difficulty bias by aligning token progress windows; and (iii) Result-anchored Advantage Synthesis, which re-weights token-level signals via outcome-driven anchors. Extensive experiments on competitive mathematical benchmarks (e.g., MATH, AIME) demonstrate that ERPO significantly outperforms GRPO. Notably, ERPO not only boosts reasoning accuracy but also yields markedly more concise and robust derivation paths, establishing a new efficiency-accuracy frontier for large reasoning models.
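
To make the token-level reweighting concrete, the sketch below illustrates the general idea behind component (i), Entropy-aware Gating: compute per-token policy entropy, flag the highest-entropy positions as candidate Critical Decision Pivots, and amplify the shared sequence-level advantage there. The quantile threshold, the boost factor, and the hard gating rule are illustrative assumptions, not ERPO's exact formulation.

```python
import torch
import torch.nn.functional as F

def token_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Per-token Shannon entropy of the policy distribution.

    logits: (batch, seq_len, vocab) -> returns (batch, seq_len).
    """
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    return -(probs * log_probs).sum(dim=-1)

def entropy_gated_advantages(seq_advantage: torch.Tensor,
                             logits: torch.Tensor,
                             quantile: float = 0.95,
                             boost: float = 1.5) -> torch.Tensor:
    """Illustrative gating: amplify the shared sequence-level advantage at
    high-entropy tokens (candidate CDPs) and leave low-entropy "execution"
    tokens at their original weight.

    seq_advantage: (batch,) GRPO-style group-relative advantage per sequence.
    logits:        (batch, seq_len, vocab) policy logits for sampled tokens.
    Returns token-level advantages of shape (batch, seq_len).
    """
    ent = token_entropy(logits)                           # (batch, seq_len)
    threshold = torch.quantile(ent, quantile, dim=-1, keepdim=True)
    gate = torch.where(ent >= threshold,
                       torch.full_like(ent, boost),       # amplify CDPs
                       torch.ones_like(ent))              # keep other tokens
    return seq_advantage.unsqueeze(-1) * gate
```

The hard two-level gate is only one possible choice; a smooth, entropy-proportional weighting would serve the same purpose of concentrating the learning signal on decision points rather than on routine execution tokens.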

Figures (4)

  • Figure 1: Token-level entropy distribution and its impact on performance for Qwen2.5-3B. (a) The model exhibits high entropy at logical branching decisions (red) and low entropy during routine inference steps (yellow), where it performs deterministic execution. (b) We sampled 50 questions that the model consistently answers correctly and randomly perturbed the top 5% highest-entropy tokens and the bottom 5% lowest-entropy tokens of each sequence. Perturbing the high-entropy tokens caused a significant drop in final accuracy ($p < 0.001$), confirming their crucial role in inference (a high-level sketch of this perturbation protocol follows the figure list).
  • Figure 2: Comparison of training efficiency and generalization performance between GRPO (Baseline) and ERPO (Ours) across three model scales (1.5B, 3B, 7B). Each row reports sampling accuracy (%) on the AMC23, Minerva, AIME24, and AIME25 benchmarks, smoothed with an EMA ($\alpha=0.2$).
  • Figure 3: Training dynamics of ERPO vs. GRPO. We visualize the (a) Reward, (b) Entropy, (c) Grad Norm, and (d) KL Divergence. Note that the entropy axis uses a symlog scale to highlight the gap in the late training stage ($0.4$ vs. $0.05$), showing that ERPO effectively prevents entropy collapse. All curves are smoothed with an EMA ($\alpha=0.12$); raw data is shown in lighter colors.
  • Figure 4: Comprehensive efficiency and training-dynamics analysis for Qwen2.5-7B. Top row: (a) compares reasoning conciseness at the best checkpoints; (b) evaluates computational overhead. Bottom row: (c) shows the stability of token generation length across four benchmarks. ERPO achieves superior performance with markedly more concise reasoning paths and training time comparable to GRPO.
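
For reference, a minimal sketch of the perturbation protocol summarized in Figure 1(b), assuming per-token entropies and next-token distributions have already been collected from the policy for a correct completion. The 5% cutoffs follow the caption, while the specific perturbation rule (resampling a different token from the model's own distribution) is an assumption for illustration.

```python
import torch

def select_perturbation_sites(entropies: torch.Tensor,
                              frac: float = 0.05,
                              highest: bool = True) -> torch.Tensor:
    """Indices of the top (or bottom) `frac` fraction of tokens by entropy
    within one sequence, mirroring the 5% cutoffs in Figure 1(b)."""
    k = max(1, int(frac * entropies.numel()))
    _, idx = torch.topk(entropies, k, largest=highest)
    return idx

def perturb_tokens(token_ids: torch.Tensor,
                   probs: torch.Tensor,
                   sites: torch.Tensor) -> torch.Tensor:
    """Replace tokens at `sites` with a *different* token drawn from the
    policy's own next-token distribution at that position (assumed rule).

    token_ids: (seq_len,) sampled tokens of a correct completion.
    probs:     (seq_len, vocab) next-token probabilities from the policy.
    """
    perturbed = token_ids.clone()
    for i in sites.tolist():
        p = probs[i] + 1e-12        # avoid an all-zero distribution
        p[token_ids[i]] = 0.0       # force a token different from the original
        p = p / p.sum()
        perturbed[i] = torch.multinomial(p, 1).item()
    return perturbed
```

One would then continue generation from each perturbed position and compare final-answer accuracy between the high-entropy and low-entropy perturbation groups, as in panel (b) of Figure 1.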