Probabilistic Subgoal Representations for Hierarchical Reinforcement Learning

Vivienne Huiling Wang, Tinghuai Wang, Wenyan Yang, Joni-Kristian Kämäräinen, Joni Pajarinen

TL;DR

This work tackles the challenge of subgoal representation in goal-conditioned hierarchical reinforcement learning by moving from deterministic mappings to a probabilistic subgoal space. It introduces HLPS, a Gaussian Process-based latent subgoal model with a learnable kernel that captures uncertainty and long-range state correlations, enabling an adaptive memory of planning steps. A novel learning objective and an online inference scheme based on state-space GP/Kalman filtering integrate subgoal learning with hierarchical policies, achieving improved stability, sample efficiency, and robustness, especially in stochastic and high-dimensional settings. Empirical results across MuJoCo tasks demonstrate superior performance and transferability of both the subgoal representations and low-level policies, highlighting practical impact for scalable HRL in diverse environments.

Abstract

In goal-conditioned hierarchical reinforcement learning (HRL), a high-level policy specifies a subgoal for the low-level policy to reach. Effective HRL hinges on a suitable subgoal representation function, abstracting state space into latent subgoal space and inducing varied low-level behaviors. Existing methods adopt a subgoal representation that provides a deterministic mapping from state space to latent subgoal space. Instead, this paper utilizes Gaussian Processes (GPs) for the first probabilistic subgoal representation. Our method employs a GP prior on the latent subgoal space to learn a posterior distribution over the subgoal representation functions while exploiting the long-range correlation in the state space through learnable kernels. This enables an adaptive memory that integrates long-range subgoal information from prior planning steps, allowing it to cope with stochastic uncertainties. Furthermore, we propose a novel learning objective to facilitate the simultaneous learning of probabilistic subgoal representations and policies within a unified framework. In experiments, our approach outperforms state-of-the-art baselines not only in standard benchmarks but also in environments with stochastic elements and under diverse reward conditions. Additionally, our model shows promising capabilities in transferring low-level policies across different tasks.
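To make the "state-space GP / Kalman filtering" idea mentioned in the TL;DR concrete, here is a minimal sketch, not the paper's HLPS implementation. It assumes a Matérn-1/2 (Ornstein-Uhlenbeck) kernel, whose state-space form makes GP posterior inference over a trajectory equivalent to per-dimension Kalman filtering; the encoder `encode`, the hyperparameters, and all dimensions are hypothetical stand-ins.

```python
import numpy as np

def encode(state):
    """Stand-in for the learned encoder network (hypothetical): maps a raw
    state to a noisy observation f of the latent subgoal z. Here it is a
    fixed random linear map, purely for illustration."""
    rng = np.random.default_rng(0)  # fixed seed -> same map on every call
    W = rng.standard_normal((2, state.shape[0])) / np.sqrt(state.shape[0])
    return W @ state

class StateSpaceGP:
    """Per-dimension Kalman filter equivalent to GP regression with a
    Matern-1/2 (Ornstein-Uhlenbeck) kernel k(t, t') = s2 * exp(-|t-t'| / ell).
    The filter's running mean/variance act as an 'adaptive memory' that
    integrates subgoal information from earlier planning steps."""

    def __init__(self, dim, lengthscale=5.0, signal_var=1.0, noise_var=0.1):
        self.ell, self.s2, self.r = lengthscale, signal_var, noise_var
        self.mean = np.zeros(dim)            # posterior mean of z_t
        self.var = np.full(dim, signal_var)  # posterior variance of z_t

    def step(self, f, dt=1.0):
        # Predict: OU transition a = exp(-dt/ell), process noise s2*(1 - a^2)
        a = np.exp(-dt / self.ell)
        m_pred = a * self.mean
        p_pred = a**2 * self.var + self.s2 * (1.0 - a**2)
        # Update: treat the encoded state f as a noisy observation of z_t
        gain = p_pred / (p_pred + self.r)
        self.mean = m_pred + gain * (f - m_pred)
        self.var = (1.0 - gain) * p_pred
        return self.mean, self.var  # probabilistic subgoal representation

# Usage: filter a short toy trajectory of encoded states.
gp = StateSpaceGP(dim=2)
for t in range(5):
    state = np.sin(0.3 * t) * np.ones(8)  # toy 8-D state
    z_mean, z_var = gp.step(encode(state))
print(z_mean, z_var)
```

The filter's running mean and variance play the role of the adaptive memory described in the abstract: each new encoded state refines the posterior over the subgoal representation rather than replacing it, which is what lets the representation absorb stochastic perturbations.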

Paper Structure

This paper contains 26 sections, 14 equations, 11 figures, 3 tables, and 1 algorithm.

Figures (11)

  • Figure 1: A schematic illustration of the hierarchical policy execution. One high-level step corresponds to k low-level steps. The negative Euclidean distance in the latent space provides intrinsic rewards for the low-level policy. (A minimal sketch of this loop appears after the figure list.)
  • Figure 2: The representation function consists of an encoding layer and a latent GP layer. Taking as input the state $\mathbf{s}$, the encoding layer comprises a neural network to generate an intermediate latent space representation $\mathbf{f}$, which will be transformed by the GP layer to produce the final subgoal representation $\mathbf{z}$.
  • Figure 3: Environments used in our experiments.
  • Figure 4: Learning curves of our method and baselines in stochastic environments, with sparse (rows 1 and 2) or dense (row 3) external rewards, and with (rows 2 and 3) or without top-down image observations. Each curve and its shaded region represent the average success rate and 95% confidence interval respectively, averaged over 10 independent trials.
  • Figure 5: Learning curves of our method and baselines in robotic arm environments 7-DOF Reacher and 7-DOF Pusher, with sparse external rewards.
  • ...and 6 more figures
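The execution scheme from Figure 1 can be summarized in a short loop. The sketch below is a generic goal-conditioned HRL rollout matching the caption, not the authors' code: `ToyEnv`, `high_policy`, `low_policy`, and the representation function `phi` are hypothetical placeholders.

```python
import numpy as np

def phi(state):
    """Placeholder subgoal representation: state -> latent subgoal space."""
    return state[:2]  # toy projection, purely for illustration

class ToyEnv:
    """Trivial 4-D environment, purely for illustration."""
    def reset(self):
        self.s = np.zeros(4)
        return self.s
    def step(self, action):
        self.s = self.s + 0.1 * action
        return self.s, 0.0, False  # next_state, external reward, done

def hierarchical_rollout(env, high_policy, low_policy, k=10, horizon=100):
    """One high-level step every k low-level steps (cf. Figure 1). The
    low-level intrinsic reward is the negative Euclidean distance between
    the latent of the reached state and the commanded subgoal."""
    state = env.reset()
    for t in range(horizon):
        if t % k == 0:                    # high level acts every k steps
            subgoal = high_policy(state)  # subgoal lives in latent space
        action = low_policy(state, subgoal)
        next_state, ext_reward, done = env.step(action)
        intrinsic_reward = -np.linalg.norm(phi(next_state) - subgoal)
        # ...store (state, action, intrinsic_reward, ...) for low-level training
        state = next_state
        if done:
            break

# Usage with trivial stand-in policies:
hierarchical_rollout(ToyEnv(),
                     high_policy=lambda s: phi(s) + 1.0,
                     low_policy=lambda s, g: np.ones(4))
```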