Stabilizing Rubric Integration Training via Decoupled Advantage Normalization

Zelin Tan, Zhouliang Yu, Bohan Lin, Zijie Geng, Hejia Geng, Yudong Zhang, Mulei Zhang, Yang Chen, Shuyue Hu, Zhenfei Yin, Chen Zhang, Lei Bai

Abstract

We propose Process-Aware Policy Optimization (PAPO), a method that integrates process-level evaluation into Group Relative Policy Optimization (GRPO) through decoupled advantage normalization, to address two limitations of existing reward designs. Outcome reward models (ORM) evaluate only final-answer correctness, treating all correct responses identically regardless of reasoning quality, and gradually lose the advantage signal as groups become uniformly correct. Process reward models (PRM) offer richer supervision, but directly using PRM scores causes reward hacking, where models exploit verbosity to inflate scores while accuracy collapses. PAPO resolves both by composing the advantage from an outcome component $A_{\text{out}}$, derived from ORM and normalized over all responses, and a process component $A_{\text{proc}}$, derived from a rubric-based PRM and normalized exclusively among correct responses. This decoupled design ensures that $A_{\text{out}}$ anchors training on correctness while $A_{\text{proc}}$ differentiates reasoning quality without distorting the outcome signal. Experiments across multiple model scales and six benchmarks demonstrate that PAPO consistently outperforms ORM, reaching 51.3% vs. 46.3% on OlympiadBench while continuing to improve as ORM plateaus and declines.
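As a concrete illustration of the decoupled normalization described above, here is a minimal NumPy sketch. The function name `papo_advantages`, the epsilon guard, and the handling of groups with fewer than two correct responses are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def papo_advantages(rewards_out, rewards_proc, eps=1e-8):
    """Combine outcome and process advantages via decoupled normalization (sketch).

    rewards_out  -- binary ORM correctness for each of the G responses in a group
    rewards_proc -- rubric-based PRM score for each response (only the scores
                    of correct responses are ever used)
    """
    r_out = np.asarray(rewards_out, dtype=float)
    r_proc = np.asarray(rewards_proc, dtype=float)

    # A_out: standard GRPO normalization over the whole group. When the group
    # becomes uniformly correct, this signal vanishes (signal exhaustion).
    a_out = (r_out - r_out.mean()) / (r_out.std() + eps)

    # A_proc: normalized exclusively among the correct responses and set to
    # zero elsewhere, so PRM scores cannot distort the outcome signal.
    a_proc = np.zeros_like(r_proc)
    correct = r_out > 0.5
    if correct.sum() > 1:  # need at least two correct responses to normalize
        sub = r_proc[correct]
        a_proc[correct] = (sub - sub.mean()) / (sub.std() + eps)

    # A_total = A_out + A_proc
    return a_out + a_proc
```

In a group where every response is correct, `a_out` collapses toward zero while `a_proc` still ranks responses by rubric quality, which is the behavior the paper credits for PAPO's continued improvement after ORM plateaus.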

Paper Structure

This paper contains 64 sections, 12 equations, 9 figures, and 6 tables.

Figures (9)

  • Figure 1: (a) ORM (blue) plateaus and declines after step 750 due to signal exhaustion; PRM (red, dashed) collapses via reward hacking; ORM$\times$PRM (purple, dash-dotted) tracks ORM closely without exceeding it; PAPO (green) continues improving throughout training, reaching 51.3%. (b) Comparison across AIME 2024/2025, OlympiadBench, and their average. Naive multiplicative combination (ORM$\times$PRM) barely improves over ORM, while PAPO's decoupled normalization yields substantial gains on all benchmarks.
  • Figure 2: Reward signal analysis. (a) Training reward: PRM reward climbs to 1.0 (perfect score gaming); ORM$\times$PRM stays moderate. (b) Response length: PRM generates increasingly verbose responses; ORM$\times$PRM shows moderate length increase (up to $\sim$1700 tokens). (c) OlympiadBench accuracy: PRM collapses after step 600; ORM$\times$PRM tracks ORM but fails to exceed it, showing that naive signal combination does not resolve signal exhaustion.
  • Figure 3: Overview of PAPO. Given a prompt, the policy generates $G$ responses. Each response is evaluated by two reward signals: an outcome reward (ORM, binary correctness) and a process reward (PRM, rubric-based quality, only for correct responses). The advantage is computed through decoupled normalization: $A_{\text{out}}$ is normalized over all responses via standard GRPO, while $A_{\text{proc}}$ is normalized exclusively among correct responses (correct-subset normalization). The combined advantage $A_{\text{total}} = A_{\text{out}} + A_{\text{proc}}$ provides both correctness direction and quality differentiation.
  • Figure 4: Signal quality comparison on Qwen2.5-7B. (a) Zero-advantage ratio: ORM's sparsity grows to 69% while PAPO maintains 44%. (b) Advantage standard deviation, reflecting gradient signal strength. (c) Positive-advantage ratio, reflecting reinforcement density. (A sketch of these diagnostics appears after this list.)
  • Figure 5: Training curves (avg@4) across two model scales and three benchmarks. Top row: Qwen2.5-3B; bottom row: Qwen2.5-7B. PAPO consistently outperforms ORM throughout training on both scales, with gains widening in later stages as ORM's signal exhaustion worsens. The pattern is consistent across OlympiadBench (competition math), MATH-500 (standard math), and HumanEval (code generation). Qwen2.5-14B results are reported in Table \ref{tab:main_results}.
  • ...and 4 more figures
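For readers who want to reproduce the kind of signal-quality diagnostics plotted in Figure 4, a minimal sketch follows. The function name, the zero tolerance `tol`, and the exact definitions are assumptions, since the captions do not give the paper's precise formulas.

```python
import numpy as np

def advantage_diagnostics(advantages, tol=1e-6):
    """Per-group signal-quality statistics (illustrative definitions)."""
    a = np.asarray(advantages, dtype=float)
    return {
        # Fraction of responses with (numerically) zero advantage:
        # high values indicate a sparse gradient signal.
        "zero_advantage_ratio": float(np.mean(np.abs(a) < tol)),
        # Spread of advantages, a proxy for gradient signal strength.
        "advantage_std": float(a.std()),
        # Fraction of responses receiving positive reinforcement.
        "positive_advantage_ratio": float(np.mean(a > tol)),
    }
```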