DreamerAD: Efficient Reinforcement Learning via Latent World Model for Autonomous Driving

Pengxuan Yang, Yupeng Zheng, Deheng Qian, Zebin Xing, Qichao Zhang, Linbo Wang, Yichen Zhang, Shaoyu Guo, Zhongpu Xia, Qiang Chen, Junyu Han, Lingyun Xu, Yifeng Pan, Dongbin Zhao

Abstract

We introduce DreamerAD, the first latent world model framework that enables efficient reinforcement learning for autonomous driving by compressing diffusion sampling from 100 steps to a single step, achieving an 80x speedup while maintaining visual interpretability. Training RL policies on real-world driving data incurs prohibitive costs and safety risks. While existing pixel-level diffusion world models enable safe imagination-based training, they suffer from multi-step diffusion inference latency (2s/frame) that prevents high-frequency RL interaction. Our approach leverages denoised latent features from video generation models through three key mechanisms: (1) shortcut forcing, which reduces sampling complexity via recursive multi-resolution step compression; (2) an autoregressive dense reward model that operates directly on latent representations for fine-grained credit assignment; and (3) Gaussian vocabulary sampling for GRPO, which constrains exploration to physically plausible trajectories. DreamerAD achieves 87.7 EPDMS on NavSim v2, establishing state-of-the-art performance and demonstrating that latent-space RL is effective for autonomous driving.
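
As a rough illustration of mechanism (3), the sketch below shows one way candidate trajectories could be drawn from a Gaussian around trajectory-vocabulary anchors and how GRPO-style group-relative advantages are computed. The function names, the nearest-anchor selection heuristic, and the noise scale are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sample_candidates(base_traj, vocab, sigma=0.1, num_samples=8, rng=None):
    """Illustrative Gaussian vocabulary sampling: perturb the vocabulary
    anchors closest to the base policy's trajectory, so exploration stays
    near physically plausible paths.

    base_traj: (T, 2) waypoints from the base policy.
    vocab:     (V, T, 2) predefined trajectory vocabulary.
    """
    rng = rng or np.random.default_rng()
    # Distance of every vocabulary anchor to the base trajectory.
    dists = np.linalg.norm(vocab - base_traj, axis=(1, 2))
    anchors = vocab[np.argsort(dists)[:num_samples]]
    # Small Gaussian noise around each selected anchor.
    return anchors + rng.normal(scale=sigma, size=anchors.shape)

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantage: normalize each candidate's reward by the mean
    and std of its group (all trajectories sampled for one scenario)."""
    rewards = np.asarray(rewards, dtype=np.float64)
    return (rewards - rewards.mean()) / (rewards.std() + eps)
```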

Paper Structure

This paper contains 27 sections, 23 equations, 5 figures, and 5 tables.

Figures (5)

  • Figure 1: World model imagination training guided by diverse trajectories. Each row shows a driving scenario where the world model imagines future outcomes for candidate trajectories. RGB sequences display predicted frames with reward model scores (red: collision risk, green: safe). BEV maps (right) visualize trajectories: hazardous paths (left, red-highlighted) versus safe alternatives (right, green-highlighted).
  • Figure 2: PCA visualization of denoised latent features, demonstrating strong spatial and semantic coherence.
  • Figure 3: Overview of the DreamerAD RL training architecture. The RL training pipeline consists of three main stages: 1) Policy Generation and Sampling (yellow): Generates a base policy from historical inputs and samples a set of candidate trajectories based on a predefined vocabulary. 2) RL Training via World Model (green): Performs latent rollouts for the sampled trajectories to imagine future states. Step-wise rewards are decoded from these latent features and aggregated into a time-aware dense reward. Notably, our latent representations can be losslessly decoded into RGB frames for accident analysis or visualization, though decoding is bypassed during training for efficiency. 3) Policy Optimization (blue): Computes group advantages from the dense rewards to optimize the policy network using the GRPO algorithm. A minimal code sketch of stages 2 and 3 follows this figure list.
  • Figure 4: Visualization of one-step inference for Epona and Shortcut Forcing World Model.
  • Figure 5: Comparison before and after RL training. The leftmost column displays the front-view camera image at the current timestep. The right columns show the BEV planning results from SFT and RL, respectively. The red trajectory represents the SFT output, while the blue represents the RL output. Red highlights in the SFT BEV maps indicate collisions, whereas green highlights in the RL BEV maps denote safe passage.
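
The imagination loop in stage 2 of the Figure 3 caption can be summarized with the following sketch: each candidate trajectory is rolled out in latent space, every imagined step is scored by the reward model without decoding to RGB, and the step-wise rewards are aggregated into a single time-aware return. The `world_model.step` and `reward_model.score` interfaces, the discounted-average aggregation, and the horizon are hypothetical placeholders standing in for the paper's components.

```python
import numpy as np

def dense_return(step_rewards, gamma=0.95):
    """Aggregate step-wise rewards into one time-aware scalar (illustrative:
    a discounted weighted average, so distant imagined steps count less)."""
    step_rewards = np.asarray(step_rewards, dtype=np.float64)
    weights = gamma ** np.arange(len(step_rewards))
    return float(np.sum(weights * step_rewards) / np.sum(weights))

def evaluate_candidates(world_model, reward_model, history_latents, candidates, horizon=8):
    """Roll each candidate trajectory forward in latent space and score every
    imagined step with the reward model; RGB decoding is skipped entirely.
    `world_model.step` and `reward_model.score` are hypothetical interfaces."""
    returns = []
    for traj in candidates:
        latents, rewards = history_latents, []
        for t in range(horizon):
            # One-step (shortcut) latent prediction conditioned on the action.
            latents = world_model.step(latents, traj[t])
            # Step-wise reward read directly from the latent state.
            rewards.append(reward_model.score(latents))
        returns.append(dense_return(rewards))
    return returns
```

In this reading of the pipeline, the returns from `evaluate_candidates` would feed `group_relative_advantages` from the earlier sketch to form the group advantages used in the GRPO update of stage 3.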