
AutoWorld: Scaling Multi-Agent Traffic Simulation with Self-Supervised World Models

Mozhgan Pourkeshavatz, Tianran Liu, Nicholas Rhinehart

Abstract

Multi-agent traffic simulation is central to developing and testing autonomous driving systems. Recent data-driven simulators have achieved promising results, but rely heavily on supervised learning from labeled trajectories or semantic annotations, making it costly to scale their performance. Meanwhile, large amounts of unlabeled sensor data can be collected at scale but remain largely unused by existing traffic simulation frameworks. This raises a key question: How can a method harness unlabeled data to improve traffic simulation performance? In this work, we propose AutoWorld, a traffic simulation framework that employs a world model learned from unlabeled occupancy representations of LiDAR data. Given world model samples, AutoWorld constructs a coarse-to-fine predictive scene context as input to a multi-agent motion generation model. To promote sample diversity, AutoWorld uses a cascaded Determinantal Point Process framework to guide the sampling processes of both the world model and the motion model. Furthermore, we designed a motion-aware latent supervision objective that enhances AutoWorld's representation of scene dynamics. Experiments on the WOSAC benchmark show that AutoWorld ranks first on the leaderboard according to the primary Realism Meta Metric (RMM). We further show that simulation performance consistently improves with the inclusion of unlabeled LiDAR data, and study the efficacy of each component with ablations. Our method paves the way for scaling traffic simulation realism without additional labeling. Our project page contains additional visualizations and released code.
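The abstract's "cascaded Determinantal Point Process" sampler is not specified in this excerpt, but the core idea of DPP-guided diverse sampling can be illustrated with a minimal greedy MAP sketch: given a pool of candidate samples (e.g., world-model or motion-model latents), greedily pick the subset that maximizes the determinant of a similarity kernel, which penalizes near-duplicate picks. The RBF kernel, the `length_scale` parameter, and the function name are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def greedy_dpp_select(candidates, k, length_scale=1.0):
    """Greedily pick k mutually diverse candidates (approximate DPP MAP).

    candidates: (n, d) array of feature vectors (e.g., latent samples).
    Returns the list of selected indices.
    NOTE: illustrative sketch; AutoWorld's cascaded DPP is not specified here.
    """
    n = len(candidates)
    # RBF similarity kernel over candidate features; diag is 1.
    sq = np.sum((candidates[:, None] - candidates[None]) ** 2, axis=-1)
    L = np.exp(-sq / (2.0 * length_scale ** 2))
    selected = []
    for _ in range(min(k, n)):
        best, best_gain = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            idx = selected + [i]
            # Marginal gain = log-det of the kernel restricted to the subset;
            # similar items shrink the determinant, so diverse picks win.
            _, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if logdet > best_gain:
                best, best_gain = i, logdet
        selected.append(best)
    return selected
```

In a cascaded setup, one could apply such a selection first to world-model samples and then, conditioned on each kept sample, to motion-model rollouts, so that diversity is promoted at both stages without any additional training.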


Paper Structure

This paper contains 19 sections, 8 equations, 6 figures, 4 tables, 2 algorithms.

Figures (6)

  • Figure 1: Comparison between existing traffic simulation approaches and AutoWorld. Existing methods (right) rely solely on labeled trajectory data, whereas AutoWorld (left) leverages unlabeled LiDAR to learn future scene occupancies that guide behavior generation.
  • Figure 2: Overview of AutoWorld. A LiDAR-based world model is first trained on unlabeled sequences to learn latent scene dynamics. At simulation time, the trained model predicts future occupancies from the observed LiDAR history, which condition the motion generation module. At inference, we sample from both the world model and the motion generator using a training-free cascaded latent diversity strategy.
  • Figure 3: Simulation rollouts generated by AutoWorld at 0, 2.6, 3.7, 5.5, and 8 seconds (left to right). The purple box highlights the interaction of two agents over time.
  • Figure 4: RMM vs. amount of unlabeled data in world-model training: Adding unlabeled LiDAR sequences enhances AutoWorld's simulation realism. Baselines without a world model cannot use unlabeled data. The full AutoWorld setting employs the motion diversity strategy, whereas the IID/IID setting uses IID sampling.
  • Figure 5: Qualitative analysis of multimodal behavior coverage. The ground-truth ego trajectory, candidate SDC paths from WOMD, and 32 rollouts generated by AutoWorld are visualized (left to right). The model produces diverse behaviors within a single scenario, including alternative turning directions and lane changes, with diversity that also reflects different kinematic profiles.
  • ...and 1 more figure