Optimizing Neurorobot Policy under Limited Demonstration Data through Preference Regret

Viet Dung Nguyen, Yuhang Song, Anh Nguyen, Jamison Heard, Reynold Bailey, Alexander Ororbia

Abstract

Robot reinforcement learning from demonstrations (RLfD) assumes that expert data is abundant; this is usually unrealistic in the real world, where such data is scarce and costly to collect. Furthermore, imitation learning algorithms assume that the data is independently and identically distributed, which degrades performance as small errors emerge and compound within test-time trajectories. We address these issues by introducing the "master your own expertise" (MYOE) framework, a self-imitation framework that enables robotic agents to learn complex behaviors from limited demonstration data. Inspired by human perception and action, we propose the queryable mixture-of-preferences state space model (QMoP-SSM), which estimates the desired goal at every time step. These desired goals are used to compute a "preference regret", which in turn is used to optimize the robot control policy. Our experiments demonstrate the robustness, adaptability, and out-of-sample performance of our agent compared to other state-of-the-art RLfD schemes. The GitHub repository that supports this work can be found at: https://github.com/rxng8/neurorobot-preference-regret-learning.
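
As a rough illustration of the mechanism described above, the sketch below computes a per-step preference regret from the goals estimated by the QMoP-SSM. The names (`preference_regret`, `latents`, `goals`), the array shapes, and the squared-Euclidean distance are assumptions for illustration; the paper may define the regret differently.

```python
import numpy as np

def preference_regret(z_t, g_t):
    """Per-step preference regret: a discrepancy between the agent's
    latent state z_t and the desired goal g_t estimated by the QMoP-SSM.
    The squared-Euclidean distance is an illustrative stand-in for the
    paper's actual measure."""
    return float(np.sum((np.asarray(z_t) - np.asarray(g_t)) ** 2))

# Illustrative usage over a short trajectory of placeholder latents/goals.
rng = np.random.default_rng(0)
latents = rng.normal(size=(5, 8))  # 5 time steps, 8-dim latent states
goals = rng.normal(size=(5, 8))    # per-step goals queried from the QMoP-SSM
regrets = [preference_regret(z, g) for z, g in zip(latents, goals)]
```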

Paper Structure

This paper contains 7 sections, 1 theorem, 12 equations, 4 figures, and 4 tables.

Key Result

Lemma 1

Minimizing preference regret, used as an internal reward in the advantage computation, guides the agent toward preferred trajectories while maintaining the ability to maximize the final reward when preferences are sub-optimal.
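
To make Lemma 1 concrete, here is a minimal sketch that folds the negative preference regret into the reward stream as an internal reward before a standard generalized advantage estimation (GAE) pass. The weighting coefficient `beta`, the discount settings, and the use of GAE are assumptions for illustration, not necessarily the paper's exact advantage computation.

```python
import numpy as np

def advantages_with_regret(ext_rewards, regrets, values,
                           beta=0.1, gamma=0.99, lam=0.95):
    """Advantage estimate in the spirit of Lemma 1: the negative
    preference regret enters as an internal reward alongside the task
    (final) reward, then advantages are computed with GAE. `values`
    carries one extra bootstrap entry: len(values) == len(ext_rewards) + 1."""
    rewards = (np.asarray(ext_rewards, dtype=float)
               - beta * np.asarray(regrets, dtype=float))
    values = np.asarray(values, dtype=float)
    T = len(rewards)
    adv = np.zeros(T)
    gae = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        gae = delta + gamma * lam * gae
        adv[t] = gae
    return adv

# Illustrative usage with placeholder numbers.
adv = advantages_with_regret(ext_rewards=[0.0, 0.0, 1.0],
                             regrets=[0.5, 0.2, 0.1],
                             values=[0.3, 0.4, 0.6, 0.0])
```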

Figures (4)

  • Figure 1: Our Proposed Agent Framework. The agent learns internal representations via encoders, the QMoP-SSM, and a decoder. Imagined future states and preferences guide both policy learning and value estimation.
  • Figure 2: QMoP-SSM Learning Architecture. QMoP-SSM separates latent learning into two interconnected parallel processes: "representation" and "preference" learning. While the model predicts future latents given previous actions, future preference-state trajectories are guided by provided and learned goals (a minimal sketch of this two-branch objective follows this list).
  • Figure 3: Cumulative reward ($y$-axis) across $1$ million interaction/training steps ($x$-axis) for different agents.
  • Figure 4: Our proposed MYOE agent solving the "reach" task when integrated into the "7bot" robot (top) and the "block picking" task when integrated into the PX100 robot (bottom).
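
Below is a minimal sketch of the two-branch objective described in the Figure 2 caption, assuming a representation loss on predicted future latents and a preference loss that pulls preference states toward the estimated goals. The function name `qmop_losses`, the mean-squared terms, and the weighting `alpha` are illustrative assumptions rather than the paper's actual training objective.

```python
import numpy as np

def qmop_losses(z_pred, z_target, p_pred, goals, alpha=1.0):
    """Two-branch objective mirroring the Figure 2 caption: a
    representation loss on predicted future latents and a preference loss
    pulling the preference-state trajectory toward the provided/learned
    goals. Mean-squared terms and the weighting `alpha` are assumptions."""
    repr_loss = float(np.mean((z_pred - z_target) ** 2))  # representation branch
    pref_loss = float(np.mean((p_pred - goals) ** 2))     # preference branch
    return repr_loss + alpha * pref_loss

# Placeholder sequences: 5 steps of 8-dim latents and preference states.
rng = np.random.default_rng(1)
total = qmop_losses(rng.normal(size=(5, 8)), rng.normal(size=(5, 8)),
                    rng.normal(size=(5, 8)), rng.normal(size=(5, 8)))
```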

Theorems & Definitions (2)

  • Lemma 1
  • Proof of Lemma 1