
Learning Task-Invariant Properties via Dreamer: Enabling Efficient Policy Transfer for Quadruped Robots

Junyang Liang, Yuxuan Liu, Yabin Chang, Junfan Lin, Junkai Ji, Hui Li, Changxin Huang, Jianqiang Li

Abstract

Achieving quadruped robot locomotion across diverse and dynamic terrains presents significant challenges, primarily due to the discrepancies between simulation environments and real-world conditions. Traditional sim-to-real transfer methods often rely on manual feature design or costly real-world fine-tuning. To address these limitations, this paper proposes the DreamTIP framework, which incorporates Task-Invariant Properties learning within the Dreamer world model architecture to enhance sim-to-real transfer capabilities. Guided by large language models, DreamTIP identifies and leverages Task-Invariant Properties, such as contact stability and terrain clearance, which exhibit robustness to dynamic variations and strong transferability across tasks. These properties are integrated into the world model as auxiliary prediction targets, enabling the policy to learn representations that are insensitive to underlying dynamic changes. Furthermore, an efficient adaptation strategy is designed, employing a mixed replay buffer and regularization constraints to rapidly calibrate to real-world dynamics while effectively mitigating representation collapse and catastrophic forgetting. Extensive experiments on complex terrains, including Stair, Climb, Tilt, and Crawl, demonstrate that DreamTIP significantly outperforms state-of-the-art baselines in both simulated and real-world environments. Our method achieves an average performance improvement of 28.1% across eight distinct simulated transfer tasks. In the real-world Climb task, the baseline method achieved only a 10% success rate, whereas our method attained a 100% success rate. These results indicate that incorporating Task-Invariant Properties into Dreamer learning offers a novel solution for achieving robust and transferable robot locomotion.
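The adaptation stage described above mixes simulation rollouts with the few real-world rollouts when training on hardware. The paper does not give an implementation, but the idea of a mixed replay buffer can be sketched as follows; the class name, the `real_fraction` ratio, and the tuple-based transition format are illustrative assumptions, not the authors' code.

```python
import random

class MixedReplayBuffer:
    """Hypothetical sketch of a mixed replay buffer: each minibatch is
    drawn partly from simulation transitions and partly from the small
    pool of real-world transitions, so adaptation updates see both
    dynamics distributions instead of overfitting to scarce real data."""

    def __init__(self, real_fraction=0.5):
        self.sim = []                    # transitions collected in simulation
        self.real = []                   # transitions collected on the robot
        self.real_fraction = real_fraction

    def add(self, transition, real=False):
        # Route the transition to the matching sub-buffer.
        (self.real if real else self.sim).append(transition)

    def sample(self, batch_size):
        # Take as many real transitions as the target ratio allows,
        # capped by how many real rollouts actually exist.
        n_real = min(int(batch_size * self.real_fraction), len(self.real))
        n_sim = batch_size - n_real
        batch = random.sample(self.real, n_real) + random.sample(self.sim, n_sim)
        random.shuffle(batch)            # avoid a fixed sim/real ordering
        return batch
```

In this sketch, early in adaptation the real pool is small, so minibatches are dominated by simulation data; as real rollouts accumulate, the batch composition approaches the target `real_fraction`. The regularization constraints the abstract mentions would be applied in the world-model update, not in the buffer itself.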

Paper Structure

This paper contains 18 sections, 5 equations, 6 figures, and 2 tables.

Figures (6)

  • Figure A1: Different Dreamer learning paradigms. The original Dreamer learns environment dynamics by reconstructing observations. DreamTIP, building upon this, also incorporates Task-Invariant Properties designed by an LLM to reduce its over-reliance on underlying dynamics parameters.
  • Figure C1: Overview of the proposed framework. The framework consists of two stages: In the first stage, DreamTIP is employed in a simulation environment to learn Task-Invariant Properties; In the second stage, it adapts to the dynamics distribution in the physical environment with only a few rollouts.
  • Figure D1: Performance comparison of various methods on eight transfer tasks in simulation. The evaluation metric is the average cumulative reward over 100 trajectories per task. Our method outperformed all other baselines across the board. The vertical axis represents the average trajectory reward, while the horizontal axis indicates the varying levels of task difficulty. The results are obtained through testing over 100 trajectories with 3 different random seeds.
  • Figure E1: Illustrations of terrain settings in simulation and real-world evaluation.
  • Figure E2: Performance comparison of various methods on the crawl task across simulated and real environments. Simulation environment (top, above gray dashed line) and real-world environment (bottom). Red line: obstacle height; Yellow dots: robot dog's traversal height at the obstacle. With the obstacle height set to 25 cm in both environments, the Baseline method encounters collisions with its head when passing through the obstacles, whereas our method traverses safely. This demonstrates the superior task transfer performance of our method, as well as the consistency in its sim-to-real effectiveness.
  • ...and 1 more figure