DGPO: Distribution-Guided Policy Optimization for Fine-Grained Credit Assignment

Hongbo Jin, Rongpeng Zhu, Zhongjing Du, Xu Jiang, Jingqi Tian, Qiaoman Zhang, Jiayu Ding

Abstract

Reinforcement learning is crucial for aligning large language models to perform complex reasoning tasks. However, current algorithms such as Group Relative Policy Optimization (GRPO) suffer from coarse-grained, sequence-level credit assignment, which struggles to isolate the pivotal reasoning steps within long Chain-of-Thought generations. Furthermore, the standard unbounded Kullback-Leibler divergence penalty induces severe gradient instability and mode-seeking conservatism, ultimately stifling the discovery of novel reasoning trajectories. To overcome these limitations, we introduce Distribution-Guided Policy Optimization (DGPO), a novel critic-free reinforcement learning framework that reinterprets distribution deviation as a guiding signal rather than a rigid penalty.

Figures (4)

  • Figure 1: Conceptual comparison between standard GRPO and our proposed DGPO. While GRPO uniformly broadcasts a coarse-grained sequence-level advantage and imposes an unbounded Reverse KL penalty that stifles exploration, DGPO dynamically reallocates advantages to individual tokens.
  • Figure 2: The computational pipeline of Distribution-Guided Policy Optimization (DGPO).
  • Figure 3: Validation accuracy on the AIME benchmark during training (Qwen2.5-32B-Base). The learning curves illustrate the progression of reasoning performance over global training steps. We report both the average Pass@1 and consensus accuracy.
  • Figure 4: Qualitative visualization of the token-level credit reallocation. The background color intensity corresponds to the magnitude of the redistributed importance weight $w_{i,t}$ (a sketch of this reallocation follows below).
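
To make the reallocation visualized in Figure 4 concrete, below is a minimal sketch of how a single sequence-level advantage could be redistributed into per-token credit via importance weights $w_{i,t}$. The excerpt does not specify DGPO's actual definition of $w_{i,t}$, so the softmax-over-deviation weighting, the function name, and the temperature `tau` are illustrative assumptions, not the paper's method.

```python
import torch

def dgpo_token_advantages(logp_policy, logp_ref, seq_advantage, tau=1.0):
    """Hypothetical sketch: redistribute one sequence-level advantage
    across tokens, treating the per-token deviation from a reference
    model as a guiding signal rather than a penalty.

    logp_policy, logp_ref: (T,) log-probs of the sampled tokens under
        the current policy and the reference model, respectively.
    seq_advantage: scalar group-relative advantage for the whole sequence.
    tau: assumed temperature controlling how sharply credit concentrates.
    """
    deviation = (logp_policy - logp_ref).abs()   # how far the policy strays per token
    w = torch.softmax(deviation / tau, dim=-1)   # normalized importance weights w_t
    # Scale by T so total credit matches GRPO's uniform broadcast of the
    # same advantage; tokens that deviate most receive the largest share.
    return seq_advantage * w * logp_policy.numel()

# GRPO baseline for comparison: every token receives the same advantage.
# grpo_advantages = seq_advantage * torch.ones_like(logp_policy)
```

Normalizing the weights so their total matches the sequence length keeps the overall credit comparable to GRPO's uniform broadcast, so under this assumed scheme only the distribution of credit across tokens changes, not its total magnitude.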