
Rainbow-DemoRL: Combining Improvements in Demonstration-Augmented Reinforcement Learning

Dwait Bhatt, Shih-Chieh Chou, Nikolay Atanasov

Abstract

Several approaches have been proposed to improve the sample efficiency of online reinforcement learning (RL) by leveraging demonstrations collected offline. The offline data can be used directly as transitions to optimize RL objectives, or offline policy and value functions can first be learned from the data and then used for online finetuning or to provide reference actions. While each of these strategies has shown compelling results, it is unclear which method has the most impact on sample efficiency, whether these approaches can be combined, and if there are cumulative benefits. We classify existing demonstration-augmented RL approaches into three categories and perform an extensive empirical study of their strengths, weaknesses, and combinations to isolate the contribution of each strategy and determine effective hybrid combinations for sample-efficient online RL. Our analysis reveals that directly reusing offline data and initializing with behavior cloning consistently outperform more complex offline RL pretraining methods for improving online sample efficiency.
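
As a minimal illustration of the first category, directly reusing offline transitions in RL updates, the sketch below draws a fixed fraction of each gradient batch from a demonstration buffer and the rest from the online replay buffer. The function name, the list-based buffers, and the demo_ratio knob are illustrative assumptions, not details taken from the paper.

    import random

    # Hypothetical buffers: plain lists of (obs, action, reward, next_obs, done)
    # transition tuples. demo_ratio is an assumed oversampling knob.
    def sample_mixed_batch(demo_buffer, online_buffer, batch_size=256, demo_ratio=0.5):
        # Take a fixed share of the batch from the offline demonstrations
        # and fill the remainder from online experience.
        n_demo = min(int(batch_size * demo_ratio), len(demo_buffer))
        batch = random.sample(demo_buffer, n_demo)
        batch += random.sample(online_buffer, batch_size - n_demo)
        return batch

In this form the demonstration data never goes stale: every update sees a constant proportion of expert transitions regardless of how large the online buffer grows.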

Paper Structure

This paper contains 21 sections, 11 equations, 9 figures, and 1 table.

Figures (9)

  • Figure 2: Three approaches for combining offline components with online RL. Strategy A samples data from $\mathcal{D}_\text{off}$ along with the online RL buffer. Strategy B uses pretrained $Q_\text{off}$ and $\pi_\text{off}$ and finetunes them with online experience. Strategy C uses actions from an offline policy as reference to generate a mixed action $a_\text{mix}$ (see the sketch after this list).
  • Figure 3: Tabletop manipulation tasks with Panda and xArm6 robots in the ManiSkill simulator [maniskill3].
  • Figure 4: Comparison of SAC vs TD3 success rates for Strategy A and C approaches.
  • Figure 5: Comparing Sample Efficiency Improvement (SEI) scores for single strategy approaches.
  • Figure 6: Comparison of Sample Efficiency Improvement (SEI) scores for the top ten hybrid variants across different strategy combinations.
  • ...and 4 more figures
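
The sketch below illustrates the action mixing of Strategy C from Figure 2: the online policy's action is blended with a reference action from the pretrained offline policy to produce $a_\text{mix}$. The callables, the convex-combination form, and the beta weight are assumptions for illustration; the paper may use a different mixing rule.

    import numpy as np

    # Hypothetical policies: callables mapping an observation to a
    # continuous action vector. beta is an assumed mixing weight.
    def mixed_action(pi_online, pi_offline, obs, beta=0.5):
        # Blend the online policy's action with the offline reference
        # action to form a_mix (Strategy C).
        a_online = np.asarray(pi_online(obs))
        a_ref = np.asarray(pi_offline(obs))
        return beta * a_ref + (1.0 - beta) * a_online

A convex combination like this keeps the executed action close to the demonstrator early in training (large beta) and can be annealed toward the pure online policy as it improves.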