HAD: Combining Hierarchical Diffusion with Metric-Decoupled RL for End-to-End Driving

Wenhao Yao, Xinglong Sun, Zhenxin Li, Shiyi Lan, Zi Wang, Jose M. Alvarez, Zuxuan Wu

Abstract

End-to-end planning has emerged as a dominant paradigm for autonomous driving, where recent models often adopt a scoring-selection framework that chooses trajectories from a large set of candidates, with diffusion-based decoding showing strong promise. However, directly selecting from the entire candidate space remains difficult to optimize, and the Gaussian perturbations used in diffusion often introduce unrealistic trajectories that complicate the denoising process. In addition, while reinforcement learning (RL) has proven effective for training these models, existing end-to-end RL approaches typically rely on a single coupled reward without structured signals, limiting optimization effectiveness. To address these challenges, we propose HAD, an end-to-end planning framework with a Hierarchical Diffusion Policy that decomposes planning into a coarse-to-fine process. To improve trajectory generation, we introduce Structure-Preserved Trajectory Expansion, which produces realistic candidates while maintaining kinematic structure. For policy learning, we develop Metric-Decoupled Policy Optimization (MDPO) to enable structured RL optimization across multiple driving objectives. Extensive experiments show that HAD achieves new state-of-the-art performance on both NAVSIM and HUGSIM, outperforming prior art by a clear margin: +2.3 EPDMS on NAVSIM and +4.9 Route Completion on HUGSIM.
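As a rough illustration of the coarse-to-fine decomposition described above, the sketch below separates planning into a low-resolution intention stage and a full-resolution refinement stage. Everything here is a hypothetical toy: the `denoise` stand-in, the stage counts, and the waypoint shapes are assumptions made for illustration, not the paper's architecture.

```python
# Toy sketch of a coarse-to-fine hierarchical diffusion policy
# (hypothetical interface; not the paper's actual implementation).
import numpy as np

rng = np.random.default_rng(0)

def denoise(traj, scene_feat, step):
    # Stand-in for a learned denoiser: nudge noisy waypoints toward a
    # straight-line "intention". A real model would be a conditional
    # network trained with a diffusion objective on scene features.
    target = np.linspace(traj[0], traj[-1], len(traj))
    return traj + 0.5 * (target - traj)

def hierarchical_plan(scene_feat, horizon=8, coarse_steps=4, fine_steps=4):
    # Stage 1: Driving Intention Establishment -- denoise a low-resolution
    # trajectory to settle the coarse maneuver (e.g., lane keep vs. turn).
    coarse = rng.normal(size=(horizon // 2, 2))
    for t in range(coarse_steps):
        coarse = denoise(coarse, scene_feat, t)
    # Upsample the coarse plan to the full horizon.
    fine = np.repeat(coarse, 2, axis=0)
    # Stage 2: Local Trajectory Refinement -- denoise at full resolution,
    # restricted to small corrections around the established intention.
    for t in range(fine_steps):
        fine = denoise(fine + rng.normal(scale=0.05, size=fine.shape),
                       scene_feat, t)
    return fine  # (horizon, 2) waypoints in the ego frame

waypoints = hierarchical_plan(scene_feat=None)
print(waypoints.shape)  # (8, 2)
```

The point of the split is that the coarse stage fixes the maneuver, so the fine stage only searches a small neighborhood around it, which mirrors the paper's motivation that selecting directly from the entire candidate space is hard to optimize.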

Paper Structure

This paper contains 23 sections, 15 equations, 8 figures, 11 tables, and 1 algorithm.

Figures (8)

  • Figure 1: Comparison with existing end-to-end planning methods. Prior approaches search the entire driving space and use a single coupled reward from online simulation. Our method narrows the search via Hierarchical Diffusion Policy and approximates metric-decoupled rewards through offline retrieval.
  • Figure 2: Overview of HAD. The Hierarchical Diffusion Policy decomposes planning into Driving Intention Establishment and Local Trajectory Refinement. MDPO provides decoupled, structured optimization signals for training.
  • Figure 3: Comparison between simulation-based trajectory evaluation and our proposed Offline Reward Retrieval scheme.
  • Figure 4: Ablation of different trajectory evaluation metrics across the hierarchical denoising stages.
  • Figure 5: Illustration of different trajectory expansion algorithms. Directly adding random noise is harmful to the trajectory's kinematic structure; expanding trajectories in Cartesian space (XY Expand) suffers from insufficient exploration of local regions; expansion in polar space (Polar Expand) yields comprehensive exploration (see the sketch after this list).
  • ...and 3 more figures
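The expansion contrast in the Figure 5 caption can be made concrete with a small sketch. The snippet below is an assumption-laden illustration, not the paper's Structure-Preserved Trajectory Expansion: `xy_expand` perturbs waypoints independently in Cartesian space, while `polar_expand` applies a shared radial scale and rotation in polar space, which varies the candidate while keeping the path's kinematic shape intact. The noise scales and the shared-perturbation scheme are guesses for illustration.

```python
# Hypothetical contrast between Cartesian (XY) and polar trajectory
# expansion; parameters and scheme are assumptions, not the paper's spec.
import numpy as np

rng = np.random.default_rng(0)

def xy_expand(traj, sigma=0.2):
    # Perturb each waypoint independently in Cartesian space. Independent
    # noise can break the smooth, forward-progressing shape of the path.
    return traj + rng.normal(scale=sigma, size=traj.shape)

def polar_expand(traj, sigma_r=0.05, sigma_theta=0.05):
    # Perturb radius and heading of all waypoints relative to the ego
    # origin; a shared scale/rotation varies the path without distorting it.
    r = np.linalg.norm(traj, axis=1)
    theta = np.arctan2(traj[:, 1], traj[:, 0])
    r = r * (1.0 + rng.normal(scale=sigma_r))      # shared radial scale
    theta = theta + rng.normal(scale=sigma_theta)  # shared rotation
    return np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)

# Expand one anchor trajectory into a small candidate set.
anchor = np.stack([np.linspace(0, 10, 8), np.zeros(8)], axis=1)
candidates = [polar_expand(anchor) for _ in range(16)]
```

Under this reading, polar expansion explores plausible variations of a maneuver (slightly longer, slightly more curved), whereas independent XY noise mostly produces jittered, kinematically invalid paths.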