Maximum Entropy Behavior Exploration for Sim2Real Zero-Shot Reinforcement Learning

Jiajun Hu, Nuria Armengol Urpi, Jin Cheng, Stelian Coros

Abstract

Zero-shot reinforcement learning (RL) algorithms aim to learn a family of policies from a reward-free dataset and recover optimal policies for any reward function directly at test time. Naturally, the quality of the pretraining dataset determines the performance of the recovered policies across tasks. However, pre-collecting a relevant, diverse dataset without prior knowledge of the downstream tasks of interest remains a challenge. In this work, we study $\textit{online}$ zero-shot RL for quadrupedal control on real robotic systems, building upon the Forward-Backward (FB) algorithm. We observe that undirected exploration yields low-diversity data, leading to poor downstream performance and rendering policies impractical for direct hardware deployment. Therefore, we introduce FB-MEBE, an online zero-shot RL algorithm that combines an unsupervised behavior exploration strategy with a regularization critic. FB-MEBE promotes exploration by maximizing the entropy of the achieved behavior distribution. Additionally, a regularization critic shapes the recovered policies toward more natural and physically plausible behaviors. We empirically demonstrate that FB-MEBE achieves improved performance compared to other exploration strategies in a range of simulated downstream tasks, and that it yields natural policies that can be deployed to hardware seamlessly, without further finetuning. Videos and code are available on our website.
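To make "maximizing the entropy of the achieved behavior distribution" concrete, the following is a minimal sketch of one way such an exploration signal could be computed, assuming a particle-based k-nearest-neighbour entropy estimate over achieved behavior embeddings. All names (`knn_entropy_estimate`, `select_exploration_behavior`, `rollout_fn`) and the candidate-sampling scheme are illustrative assumptions, not the algorithm exactly as specified in the paper.

```python
import numpy as np

def knn_entropy_estimate(embeddings: np.ndarray, k: int = 5) -> float:
    """Particle-based (k-NN) entropy estimate over a set of behavior embeddings.

    A larger average log-distance to the k-th nearest neighbour indicates the
    points cover the embedding space more evenly, i.e. higher entropy.
    """
    dists = np.linalg.norm(embeddings[:, None, :] - embeddings[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)          # ignore distance to self
    kth = np.sort(dists, axis=1)[:, k - 1]   # distance to k-th neighbour
    return float(np.mean(np.log(kth + 1e-8)))

def select_exploration_behavior(candidate_zs, rollout_fn, achieved_history, k=5):
    """Pick the candidate z whose rollout most increases the entropy of the
    achieved-behavior distribution (hypothetical selection scheme)."""
    base = knn_entropy_estimate(achieved_history, k)
    best_z, best_gain = None, -np.inf
    for z in candidate_zs:
        achieved = rollout_fn(z)  # behavior embeddings visited when acting with z
        gain = knn_entropy_estimate(np.concatenate([achieved_history, achieved]), k) - base
        if gain > best_gain:
            best_z, best_gain = z, gain
    return best_z
```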

Paper Structure

This paper contains 39 sections, 25 equations, 9 figures, 8 tables, 1 algorithm.

Figures (9)

  • Figure 2: Limitations of undirected exploration: The standard FB algorithm with undirected exploration (FB) suffers from performance degradation and entropy stagnation (left, higher is better) and yields unnatural behaviors (right, lower is better). While introducing a behavior regularizer (FB-critic) produces more plausible behaviors (right), it causes severe performance degradation and further reduces behavior diversity (left).
  • Figure 3: Maximum Entropy Behavior Forward-Backward Exploration: FB-MEBE is an online zero-shot RL algorithm that collects data by acting with behaviors $z_{exp}$ that maximize the entropy of the achieved behavior distribution. The collected data is used to train policies with a regularized loss that combines a policy improvement objective based on the FB action-value function ($Q_{FB}$) with a critic ($Q_{reg}$) trained on a behavior-regularization reward to induce meaningful locomotion patterns (a hedged sketch of such a combined objective appears after this list).
  • Figure 4: Left: Zero-shot performance averaged over 17 downstream velocity-tracking tasks (top) and orientation tasks (bottom). We plot return normalized with respect to the mean return of Fast-TD3 [seo-2025]. Results are averaged across 5 random seeds, with shaded regions indicating $\pm$ one standard deviation. Middle: Policy entropy on $\left[v_x, v_y, \omega_z\right]$ (top) and $\left[g_x, g_y, g_z\right]$ (bottom), measured at 300K training steps (higher is better). Right: Feet slippage measured at 300K training steps (lower is better).
  • Figure 5: Comparison of different methods under detailed locomotion command settings. Each group on the x-axis corresponds to a target velocity command $[v_x, v_y, \omega_z]$. The y-axis reports the mean episode return under the reward function (defined in the appendix) associated with the corresponding target velocity. Results are averaged over 5 random seeds, and error bars denote $\pm$ one standard deviation.
  • Figure 6: Comparison of different methods under detailed orientation command settings. Each group on the x-axis corresponds to a target orientation command $[\text{Pitch}, \text{Roll}]$. Tasks for which all algorithms obtain a reward below one are omitted from the graph.
  • ...and 4 more figures
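As a companion to the Figure 3 caption above, here is a minimal sketch of how a policy-improvement objective combining the FB action value $Q_{FB}(s,a,z) = F(s,a,z)^\top z$ with a regularization critic $Q_{reg}$ might look. The additive combination, the weight `alpha`, and the module names (`actor`, `forward_net`, `q_reg`) are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def regularized_actor_loss(actor, forward_net, q_reg, obs, z, alpha=0.5):
    """Sketch of a regularized policy-improvement loss.

    Q_FB(s, a, z) = F(s, a, z)^T z is the FB action value, and Q_reg(s, a)
    scores how natural the resulting motion is. The additive combination and
    alpha are illustrative choices.
    """
    action = actor(obs, z)                                 # a ~ pi(. | s, z)
    q_fb = (forward_net(obs, action, z) * z).sum(dim=-1)   # F(s, a, z)^T z
    q_r = q_reg(obs, action)                               # behavior-regularization critic
    return -(q_fb + alpha * q_r).mean()                    # ascend both value estimates
```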