OPRIDE: Offline Preference-based Reinforcement Learning via In-Dataset Exploration

Yiqin Yang, Hao Hu, Yihuan Mao, Jin Zhang, Chengjie Wu, Yuhua Jiang, Xu Yang, Runpeng Xie, Yi Fan, Bo Liu, Yang Gao, Bo Xu, Chongjie Zhang

Abstract

Preference-based reinforcement learning (PbRL) can help avoid sophisticated reward designs and align better with human intentions, showing great promise in various real-world applications. However, obtaining human feedback for preferences can be expensive and time-consuming, which forms a strong barrier for PbRL. In this work, we address the problem of low query efficiency in offline PbRL, pinpointing two primary reasons: inefficient exploration and overoptimization of learned reward functions. In response to these challenges, we propose a novel algorithm, Offline PbRL via In-Dataset Exploration (OPRIDE), designed to enhance the query efficiency of offline PbRL. OPRIDE consists of two key features: a principled exploration strategy that maximizes the informativeness of the queries and a discount scheduling mechanism aimed at mitigating overoptimization of the learned reward functions. Through empirical evaluations, we demonstrate that OPRIDE significantly outperforms prior methods, achieving strong performance with notably fewer queries. Moreover, we provide theoretical guarantees of the algorithm's efficiency. Experimental results across various locomotion, manipulation, and navigation tasks underscore the efficacy and versatility of our approach.
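
To make the second component concrete: one plausible reading of discount scheduling is to shorten the effective horizon $1/(1-\gamma)$ when the learned reward model is noisy, so that reward-model errors are bootstrapped over fewer steps. The sketch below illustrates this idea only; the function name, the multiplicative shrinkage rule, and the uncertainty input are assumptions, not the paper's method.

```python
# Hypothetical sketch of a discount scheduling rule (names and the shrinkage
# rule are illustrative, not from the paper): shrink the effective horizon
# 1/(1 - gamma) when the learned reward model is uncertain, so value targets
# propagate less reward error.

def scheduled_discount(gamma_max: float, reward_uncertainty: float,
                       sensitivity: float = 1.0) -> float:
    """Map a reward-model uncertainty estimate to a discount factor.

    High uncertainty -> smaller gamma -> shorter effective horizon,
    which limits how far noisy learned rewards are bootstrapped.
    """
    horizon_max = 1.0 / (1.0 - gamma_max)
    # Shrink the horizon multiplicatively with uncertainty (assumption).
    horizon = max(1.0, horizon_max / (1.0 + sensitivity * reward_uncertainty))
    return 1.0 - 1.0 / horizon


# Example: with gamma_max = 0.99 (horizon 100), an uncertainty of 1.0
# halves the horizon to 50, i.e. gamma = 0.98.
print(scheduled_discount(0.99, 1.0))  # 0.98
```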

Paper Structure

This paper contains 38 sections, 11 theorems, 46 equations, 2 figures, 16 tables, and 4 algorithms.

Key Result

Theorem 4

Let $\beta_k = c_1\sqrt{\log (K |\Delta \mathcal{R}|)/K}$ and $\epsilon = c_2\sqrt{\log(N|\Pi||\mathcal{Q}|)/N}$, where $c_1, c_2$ are universal constants. Then the expected suboptimality of the policy $\bar{\pi}$ returned by OPRIDE is upper bounded in terms of $\kappa$, the degree of non-linearity of the link function $\sigma$; the coverage coefficient $C^\dagger$; the number of preference queries $K$; and the size $N$ of the offline dataset.
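
For intuition about the rates in Theorem 4: both radii decay as $O(\sqrt{\log(\cdot)/n})$ in their respective sample sizes. The snippet below evaluates them numerically, with the unspecified universal constants $c_1, c_2$ set to 1 purely for illustration and hypothetical cardinalities for the function classes.

```python
import math

# Worked example of the two confidence radii in Theorem 4. The constants
# c1, c2 are universal but unspecified in this excerpt; we set them to 1
# for illustration only.

def beta(K: int, card_delta_R: int, c1: float = 1.0) -> float:
    """Confidence radius for the reward class after K preference queries."""
    return c1 * math.sqrt(math.log(K * card_delta_R) / K)

def eps(N: int, card_Pi: int, card_Q: int, c2: float = 1.0) -> float:
    """Statistical error from the N-sample offline dataset."""
    return c2 * math.sqrt(math.log(N * card_Pi * card_Q) / N)

# Both radii decay as O(sqrt(log(.)/n)): quadrupling the number of
# queries K roughly halves beta, and likewise for N and eps.
print(beta(K=100, card_delta_R=10**6))  # ~0.43
print(beta(K=400, card_delta_R=10**6))  # ~0.22
```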

Figures (2)

  • Figure 1: The procedure of OPRIDE consists of two phases. In the first, offline phase, we select queries via the exploration mechanism; the blue circles $\bullet$ and red triangles $\blacktriangle$ represent the value estimates $V_{\psi_1}$ and $V_{\psi_2}$, respectively. In the second phase, we first learn a reward function from the preference dataset and then annotate the reward-free dataset; next, we adjust the discount factor to reduce the impact of noise in the learned reward (a minimal code sketch of this two-phase loop follows after this list).
  • Figure 2: Performance of offline preference-based RL algorithms under varying query budgets. OPRIDE achieves better query efficiency across tasks and query budgets.
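
The sketch below makes Figure 1's two phases concrete. It is a minimal illustration, not the paper's implementation: every name (`score_pair`, `fit_reward`, `ask_human`, ...) is hypothetical, and scoring query pairs by the disagreement between the two value estimates $V_{\psi_1}$ and $V_{\psi_2}$ is one plausible reading of the in-dataset exploration mechanism.

```python
# Minimal sketch of the two-phase procedure in Figure 1. All names here
# are hypothetical, not the paper's API.

def score_pair(traj_a, traj_b, V1, V2):
    """Phase 1 (assumed): rank a candidate query pair by how much the two
    value estimates V_psi1 and V_psi2 disagree on its ordering."""
    return abs((V1(traj_a) - V1(traj_b)) - (V2(traj_a) - V2(traj_b)))


def opride_like(pairs, V1, V2, budget, ask_human,
                reward_free_data, fit_reward, offline_rl, schedule_gamma):
    # Phase 1: spend the query budget on the most informative pairs.
    ranked = sorted(pairs, key=lambda p: score_pair(p[0], p[1], V1, V2),
                    reverse=True)
    prefs = [(a, b, ask_human(a, b)) for a, b in ranked[:budget]]

    # Phase 2: fit a reward model on the preference labels, annotate the
    # reward-free dataset with it, and run offline RL under a discount
    # factor scheduled to damp reward-model noise.
    r_hat = fit_reward(prefs)
    labeled = [(s, a, r_hat(s, a), s_next)
               for (s, a, s_next) in reward_free_data]
    return offline_rl(labeled, gamma=schedule_gamma(r_hat))
```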

Theorems & Definitions (16)

  • Definition 1: Bellman shift coefficient (Xie et al., 2021)
  • Definition 2: Eluder dimension (Russo & Van Roy, 2013)
  • Definition 3: Generalized Linear Preference Model (a standard form of this model is shown after this list)
  • Theorem 4
  • Lemma 5
  • Theorem 6: Restatement of Theorem 4
  • Remark 7
  • Remark 8
  • Theorem 9: Performance Guarantees with Pure Offline Queries
  • Theorem 10: Performance Guarantees with Pure Offline Queries
  • ...and 6 more
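
Definition 3's exact statement is not shown in this excerpt; for reference, a standard form of a generalized linear preference model, consistent with the link function $\sigma$ of Theorem 4, labels a trajectory pair $(\tau_1, \tau_2)$ with $\tau_1 \succ \tau_2$ with probability

$$
\mathbb{P}\left(\tau_1 \succ \tau_2\right) = \sigma\Big(\sum_{t} r(s_t^1, a_t^1) - \sum_{t} r(s_t^2, a_t^2)\Big),
$$

where $\sigma$ is a monotone link function. Taking $\sigma$ to be the sigmoid recovers the Bradley-Terry model, and $\kappa$ in Theorem 4 quantifies how far $\sigma$ departs from linearity.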