ShapE-GRPO: Shapley-Enhanced Reward Allocation for Multi-Candidate LLM Training

Rui Ai, Yu Pan, David Simchi-Levi, Chonghuan Wang

Abstract

In user-agent interaction scenarios such as recommendation, brainstorming, and code suggestion, Large Language Models (LLMs) often generate sets of candidate recommendations where the objective is to maximize the collective utility of the entire set rather than the utility of each candidate independently. However, existing reinforcement learning post-training paradigms, such as Group Relative Policy Optimization (GRPO), typically assign the same set-level scalar reward to every candidate in the set. This leads to noisy training signals in which poor candidates free-ride on the high reward produced by a single strong peer, resulting in suboptimal exploration. To address this, we propose Shapley-Enhanced GRPO (ShapE-GRPO). By leveraging the permutation-invariant nature of set-level utility, we derive a Shapley-enhanced formulation from cooperative game theory to decompose set-level rewards into granular, candidate-specific signals. We show that our formulation preserves the fundamental axioms of the Shapley value while remaining computationally efficient with polynomial-time complexity. Empirically, ShapE-GRPO consistently outperforms standard GRPO across diverse datasets, with accelerated convergence during training.
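
The core operation, splitting one set-level reward into per-candidate credits via the Shapley value, can be sketched directly from its cooperative-game definition. The Python snippet below is a minimal illustration rather than the paper's implementation: the set-utility function (here, the best rating in the subset) and the example ratings are assumed for demonstration, and the exhaustive enumeration over subsets is exponential in the number of candidates, whereas the paper's formulation exploits permutation invariance to reach polynomial-time complexity.

```python
from itertools import combinations
from math import comb

def shapley_credits(candidates, set_utility):
    """Exhaustive Shapley decomposition of a set-level reward.

    candidates:  list of candidate identifiers (the "players").
    set_utility: function mapping a subset (tuple) of candidates to a scalar
                 set-level reward v(S), with v(()) == 0.
    Returns one credit per candidate; by the efficiency axiom the credits
    sum to the reward of the full candidate set.
    """
    n = len(candidates)
    credits = []
    for i, c in enumerate(candidates):
        others = candidates[:i] + candidates[i + 1:]
        phi = 0.0
        for k in range(n):  # size of the coalition S that excludes candidate c
            weight = 1.0 / (n * comb(n - 1, k))
            for S in combinations(others, k):
                phi += weight * (set_utility(S + (c,)) - set_utility(S))
        credits.append(phi)
    return credits

# Toy example (assumed, mirroring the three-suggestion scenario in Figure 1):
# individual ratings 5.0, 4.0, 3.0, and a set utility equal to the best rating
# in the subset, i.e. the user benefits from the single best suggestion shown.
ratings = {"c1": 5.0, "c2": 4.0, "c3": 3.0}

def set_utility(subset):
    return max((ratings[c] for c in subset), default=0.0)

print(shapley_credits(["c1", "c2", "c3"], set_utility))
# ~ [2.5, 1.5, 1.0]; the candidate-level credits sum to the 5.0 set reward
# that vanilla GRPO would broadcast identically to every candidate.
```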

Paper Structure

This paper contains 46 sections, 4 theorems, 32 equations, 4 figures, and 3 tables.

Key Result

Proposition 4.1

Assuming that the candidates $\{c_i^1,\dots,c_i^K\}$ in the output $o_i$ have equal lengths, the ShapE-GRPO advantage is a reweighting of the GRPO advantage.
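
Read concretely, Proposition 4.1 says each candidate's training signal is the output-level GRPO advantage scaled by a candidate-specific weight. The sketch below contrasts the two signals under one plausible instantiation; it is an assumption for illustration, not the paper's exact equation. In particular, the proportional-share weighting and the toy rewards and credits are invented, and the Shapley credits would in practice come from a decomposition like the earlier snippet.

```python
import numpy as np

def grpo_advantages(set_rewards):
    """Standard GRPO: one shared advantage per sampled output,
    obtained by normalizing the set-level rewards within the group."""
    r = np.asarray(set_rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

def shape_style_advantages(set_rewards, credits_per_output):
    """Illustrative ShapE-style reweighting (assumed form, not the paper's
    exact equation): every candidate in output i receives output i's GRPO
    advantage scaled by that candidate's relative Shapley share."""
    shared = grpo_advantages(set_rewards)
    per_candidate = []
    for adv, credits in zip(shared, credits_per_output):
        credits = np.asarray(credits, dtype=float)
        shares = credits / (np.abs(credits).sum() + 1e-8)
        # Averaging over the K candidates recovers the shared GRPO advantage
        # when the credits are nonnegative.
        per_candidate.append(adv * len(credits) * shares)
    return per_candidate

# Toy group of G = 2 sampled outputs, each containing K = 3 candidates.
set_rewards = [5.0, 2.0]                      # set-level reward of each output
credits = [[2.5, 1.5, 1.0], [1.0, 0.6, 0.4]]  # per-candidate Shapley credits (assumed)
for i, advs in enumerate(shape_style_advantages(set_rewards, credits)):
    print(f"output {i}: candidate-level advantages {np.round(advs, 2)}")
```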

Figures (4)

  • Figure 1: When the user considers the ratings for the three suggestions to be 5.0, 4.0, and 3.0, respectively, our ShapE-GRPO performs candidate-level (with broadcasting to token-level) reward allocation as above. However, GRPO only uses a single +5.0 sequence-level reward shared across tokens.
  • Figure 2: Distribution of candidate length CV for summarization.
  • Figure 3: Training curves on the Netflix dataset with user history. ShapE-GRPO converges faster and exhibits significantly more stable training dynamics compared with GRPO.
  • Figure 4: Response reward as a function of the number of candidates across different datasets using Qwen3-8B.

Theorems & Definitions (4)

  • Proposition 4.1
  • Proposition 4.2
  • Theorem 4.1
  • Corollary 4.1