Learning Pareto Set for Multi-Objective Continuous Robot Control

Tianye Shu, Ke Shang, Cheng Gong, Yang Nan, Hisao Ishibuchi

TL;DR

A simple and resource-efficient MORL algorithm is proposed that learns a continuous representation of the Pareto set in a high-dimensional policy parameter space using a single hypernet; the learned hypernet can directly generate well-trained policy networks for different user preferences.

Abstract

For a control problem with multiple conflicting objectives, there exists a set of Pareto-optimal policies called the Pareto set instead of a single optimal policy. When a multi-objective control problem is continuous and complex, traditional multi-objective reinforcement learning (MORL) algorithms search for many Pareto-optimal deep policies to approximate the Pareto set, which is quite resource-consuming. In this paper, we propose a simple and resource-efficient MORL algorithm that learns a continuous representation of the Pareto set in a high-dimensional policy parameter space using a single hypernet. The learned hypernet can directly generate various well-trained policy networks for different user preferences. We compare our method with two state-of-the-art MORL algorithms on seven multi-objective continuous robot control problems. Experimental results show that our method achieves the best overall performance with the fewest training parameters. An interesting observation is that the Pareto set is well approximated by a curved line or surface in the high-dimensional parameter space. This observation provides insight for researchers designing new MORL algorithms.
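
A minimal code sketch may help make the core idea concrete. The following hypothetical PyTorch snippet (an illustration under stated assumptions, not the authors' implementation) shows a hypernet that maps a user preference vector to the flat parameter vector of a small policy network, which is then unpacked and evaluated. All sizes, names, and the two-layer tanh policy architecture are assumptions, and the training signal (e.g., a preference-weighted scalarization of the returns) is omitted.

```python
import torch
import torch.nn as nn

class PolicyHypernet(nn.Module):
    """Maps an m-dimensional user preference to the flat parameters of a policy."""
    def __init__(self, n_objectives: int, policy_param_count: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_objectives, hidden),
            nn.ReLU(),
            nn.Linear(hidden, policy_param_count),
        )

    def forward(self, preference: torch.Tensor) -> torch.Tensor:
        # One forward pass yields the full parameter vector of one policy network.
        return self.net(preference)


def make_policy(flat_params, obs_dim, act_dim, hidden=64):
    """Unpack a flat parameter vector into a functional two-layer tanh policy."""
    shapes = [(hidden, obs_dim), (hidden,), (act_dim, hidden), (act_dim,)]
    chunks, i = [], 0
    for shape in shapes:
        n = 1
        for s in shape:
            n *= s
        chunks.append(flat_params[i:i + n].view(shape))
        i += n
    w1, b1, w2, b2 = chunks

    def policy(obs):
        h = torch.tanh(obs @ w1.T + b1)
        return torch.tanh(h @ w2.T + b2)  # continuous action in [-1, 1]

    return policy


# Usage: sample a preference on the simplex and generate the corresponding policy.
obs_dim, act_dim, n_obj, hidden = 17, 6, 2, 64
param_count = hidden * obs_dim + hidden + act_dim * hidden + act_dim
hypernet = PolicyHypernet(n_obj, param_count)
pref = torch.tensor([0.7, 0.3])           # e.g., 70% weight on one objective
policy = make_policy(hypernet(pref), obs_dim, act_dim, hidden)
action = policy(torch.randn(obs_dim))     # an action for one observation
```

Because the hypernet is the only trained model, a whole continuum of preference-conditioned policies is represented by a single set of weights rather than by many separately trained networks.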

Paper Structure

This paper contains 21 sections, 7 equations, 8 figures, 3 tables, and 1 algorithm.

Figures (8)

  • Figure 1: Basic idea of Hyper-MORL to approximate the Pareto set in the $n$-dimensional parameter space.
  • Figure 2: Visualization of all Pareto-optimal policies obtained by each MORL algorithm on each of six two-objective problems (a)-(f) and one three-objective problem (g). The total number of parameters required to represent these policies is shown in parentheses for each algorithm. For each algorithm, the run with the median hypervolume (HV) value among nine runs is shown.
  • Figure 3: Explanation of the poor performance of Hyper-MORL on MO-HalfCheetah-v2 in Figure 2 (b). The input preferences (left) and their corresponding Pareto-optimal policies (right) are plotted in two colors: the preferences in yellow (blue) correspond to the policies in yellow (blue). Two similar preferences $\bm{\omega}_{A}$ and $\bm{\omega}_{B}$ lead to two clearly different policies in the right figure.
  • Figure 4: The average training time required by each algorithm on each test problem. The termination condition for each problem is given in the corresponding table of the paper.
  • Figure 5: Visualization of the Pareto-optimal policies obtained by Hyper-MORL for the MO-Walker2d-v2 problem in the parameter space (b) and the objective space (c), together with their corresponding inputs to the hypernet in the preference space (a). Each policy is plotted in the same color as its corresponding input preference. t-SNE is used for visualization in the 11,150-dimensional parameter space (see the sketch after this figure list).
  • ...and 3 more figures
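
As a hedged illustration of the kind of visualization described for Figure 5, the sketch below projects high-dimensional policy parameter vectors to two dimensions with scikit-learn's t-SNE and colors each point by its input preference. The data here is a synthetic stand-in whose parameters vary smoothly along one direction, only to mimic the curve-like Pareto set structure the paper reports; real policies would come from the trained hypernet.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_policies, n_params = 200, 11150  # 11,150-D parameter space, as in the Figure 5 caption
# Preferences sampled on the two-objective probability simplex.
prefs = rng.dirichlet(np.ones(2), size=n_policies)
# Synthetic stand-in: parameters vary smoothly with the preference, mimicking a curve.
base = rng.normal(size=n_params)
direction = rng.normal(size=n_params)
params = base + np.outer(prefs[:, 0], direction) + 0.01 * rng.normal(size=(n_policies, n_params))

# Project to 2-D and color each policy by its preference weight.
embedded = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(params)
plt.scatter(embedded[:, 0], embedded[:, 1], c=prefs[:, 0], cmap="viridis")
plt.colorbar(label="preference weight on objective 1")
plt.title("t-SNE of policy parameters (synthetic illustration)")
plt.show()
```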