Offline Constrained RLHF with Multiple Preference Oracles

Brenden Latham, Mehrdad Moharrami

Abstract

We study offline constrained reinforcement learning from human feedback with multiple preference oracles. Motivated by applications that trade off performance with safety or fairness, we aim to maximize target population utility subject to a minimum protected group welfare constraint. From pairwise comparisons collected under a reference policy, we estimate oracle-specific rewards via maximum likelihood and analyze how statistical uncertainty propagates through the dual program. We cast the constrained objective as a KL-regularized Lagrangian whose primal optimizer is a Gibbs policy, reducing learning to a convex dual problem. We propose a dual-only algorithm that ensures high-probability constraint satisfaction and provide the first finite-sample performance guarantees for offline constrained preference learning. Finally, we extend our theoretical analysis to accommodate multiple constraints and general f-divergence regularization.
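To make the reduction concrete, the following display sketches a standard KL-regularized constrained formulation consistent with the abstract; the symbols $\hat{r}_1$, $\hat{r}_2$ (estimated target and protected-group rewards), $\beta$ (regularization weight), and $J_{\min}$ (welfare threshold) are assumed notation, not necessarily the paper's.

$$
\max_{\pi}\ \mathbb{E}_{x,\,a\sim\pi(\cdot\mid x)}\big[\hat{r}_1(x,a)\big]-\beta\,\mathrm{KL}\big(\pi\,\|\,\pi_{\mathrm{ref}}\big)
\quad\text{s.t.}\quad
\mathbb{E}_{x,\,a\sim\pi(\cdot\mid x)}\big[\hat{r}_2(x,a)\big]\ \ge\ J_{\min}.
$$

Introducing a multiplier $\lambda\ge 0$ gives the Lagrangian $L(\pi,\lambda)=\mathbb{E}\big[\hat{r}_1+\lambda\hat{r}_2\big]-\beta\,\mathrm{KL}(\pi\|\pi_{\mathrm{ref}})-\lambda J_{\min}$, whose maximizer over $\pi$ is the Gibbs policy

$$
\pi_\lambda(a\mid x)\ \propto\ \pi_{\mathrm{ref}}(a\mid x)\,\exp\!\Big(\tfrac{\hat{r}_1(x,a)+\lambda\,\hat{r}_2(x,a)}{\beta}\Big),
$$

so that learning reduces to minimizing the convex dual $g(\lambda)=\max_\pi L(\pi,\lambda)$ over $\lambda\ge 0$.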

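The dual-only approach can be illustrated with a minimal single-context sketch under linear reward models $\hat{r}_k(x,a)=\langle\widehat{\theta}_k,\phi(x,a)\rangle$: fit each oracle's reward by Bradley-Terry maximum likelihood, then run projected gradient on $\lambda\ge 0$, recovering the primal Gibbs policy in closed form at each step. All function and parameter names here (`fit_bt`, `dual_descent`, `beta`, `j_min`) are illustrative assumptions, not the paper's algorithm verbatim.

```python
import numpy as np

def fit_bt(phi_pairs, prefs, d, steps=2000, lr=0.1):
    """Bradley-Terry MLE by gradient ascent on the log-likelihood.

    phi_pairs: (N, 2, d) features of the two compared actions per query.
    prefs:     (N,) indices in {0, 1} of the preferred action.
    """
    idx = np.arange(len(prefs))
    # Feature difference: preferred minus rejected action.
    x = phi_pairs[idx, prefs] - phi_pairs[idx, 1 - prefs]
    theta = np.zeros(d)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(x @ theta))          # sigma(-<theta, x_i>)
        theta += lr * (x * p[:, None]).mean(axis=0)  # log-likelihood gradient
    return theta

def gibbs_policy(lmbda, phi, theta1, theta2, pi_ref, beta):
    """Primal maximizer of the Lagrangian: pi_ref tilted by the mixed reward.

    phi: (A, d) features of the A candidate actions at a fixed context.
    """
    logits = np.log(pi_ref) + phi @ (theta1 + lmbda * theta2) / beta
    w = np.exp(logits - logits.max())                # numerically stable softmax
    return w / w.sum()

def dual_descent(phi, theta1, theta2, pi_ref, beta, j_min,
                 steps=1000, lr=0.05):
    """Projected gradient on lambda >= 0.

    The dual (sub)gradient is the constraint slack E_pi[r2_hat] - j_min,
    so lambda grows when the constraint is violated and shrinks otherwise.
    """
    lmbda = 0.0
    for _ in range(steps):
        pi = gibbs_policy(lmbda, phi, theta1, theta2, pi_ref, beta)
        slack = pi @ (phi @ theta2) - j_min
        lmbda = max(0.0, lmbda - lr * slack)
    return lmbda, gibbs_policy(lmbda, phi, theta1, theta2, pi_ref, beta)
```

In the paper's offline setting the dual update would additionally use pessimistic reward estimates to obtain the advertised high-probability constraint satisfaction; the sketch omits that correction.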

Paper Structure

This paper contains 22 sections, 8 theorems, 52 equations, 3 figures, 2 tables, and 1 algorithm.

Key Result

Corollary 1

Let $\tilde{\pi}$ be the greedy policy w.r.t. $\widehat{\theta}_2$, i.e., $\tilde{\pi}(a\mid x)=\mathbf{1}\{a\in\arg\max_{a'}\langle \widehat{\theta}_2,\phi(x,a')\rangle\}$. With probability at least $1-\delta$, the protected-group value of $\tilde{\pi}$ admits an explicit lower bound; hence, if the right-hand side of that bound is strictly larger than $J_{\min}$, the Slater assumption holds with positive slack.
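As a concrete illustration of the greedy policy in Corollary 1, under the linear model $\hat{r}_2(x,a)=\langle\widehat{\theta}_2,\phi(x,a)\rangle$ it can be computed as below; `phi` and `theta2_hat` are hypothetical stand-ins for $\phi(x,\cdot)$ and $\widehat{\theta}_2$.

```python
import numpy as np

def greedy_policy(phi, theta2_hat):
    """One-hot policy on argmax_a <theta2_hat, phi(x, a)>.

    phi: (A, d) array of feature vectors for the A candidate actions
    at a fixed context x.
    """
    scores = phi @ theta2_hat
    pi = np.zeros(len(scores))
    pi[np.argmax(scores)] = 1.0  # ties broken by first argmax, a simplification
    return pi
```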

Figures (3)

  • Figure 1
  • Figure 2: Policy shift
  • Figure 3: Performance vs. dataset size ($N$) with $T=1000$ over three settings of $w$. Top: Primal objective sub-optimality. Bottom: Constraint violation.

Theorems & Definitions (17)

  • Corollary 1
  • Definition 1: Policy Improvement Oracle (xiong2024iterative)
  • Lemma 1
  • Lemma 2
  • Lemma 3
  • Proposition 1
  • Theorem 1
  • Theorem 2
  • Proposition 2
  • Proof
  • ...and 7 more