Vega: Learning to Drive with Natural Language Instructions

Sicheng Zuo, Yuxuan Li, Wenzhao Zheng, Zheng Zhu, Jie Zhou, Jiwen Lu

Abstract

Vision-language-action models have reshaped autonomous driving by incorporating language into the decision-making process. However, most existing pipelines use the language modality only for scene description or reasoning and lack the flexibility to follow diverse user instructions for personalized driving. To address this, we first construct a large-scale driving dataset (InstructScene) containing around 100,000 scenes annotated with diverse driving instructions and the corresponding trajectories. We then propose a unified Vision-Language-World-Action model, Vega, for instruction-based generation and planning. We employ the autoregressive paradigm to process visual inputs (vision) and language instructions (language), and the diffusion paradigm to generate future predictions (world modeling) and trajectories (action). We perform joint attention to enable interactions between the modalities and use individual projection layers for each modality to preserve modality-specific capabilities. Extensive experiments demonstrate that our method not only achieves superior planning performance but also exhibits strong instruction-following abilities, paving the way for more intelligent and personalized driving systems.
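The abstract's recipe of joint attention across modalities combined with individual projection layers per modality is the core of what Figure 3 calls a MoT architecture. The PyTorch sketch below is a minimal, hypothetical illustration of one such block; the class names, hidden size, and modality routing are our assumptions for illustration, not the paper's released code.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityProjection(nn.Module):
    """Per-modality norm plus Q/K/V and output projections (hypothetical names)."""

    def __init__(self, dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.qkv = nn.Linear(dim, 3 * dim)
        self.out = nn.Linear(dim, dim)


class JointAttentionBlock(nn.Module):
    """All modality tokens attend to each other in one pass, while each
    modality is projected with its own weights, as the abstract describes."""

    def __init__(self, dim: int = 256, n_heads: int = 8,
                 modalities=("vision", "language", "world", "action")):
        super().__init__()
        self.n_heads = n_heads
        self.proj = nn.ModuleDict({m: ModalityProjection(dim) for m in modalities})

    def forward(self, tokens: dict) -> dict:
        # Project each modality with its own weights, then concatenate along
        # the sequence axis so attention runs jointly over all modalities.
        qs, ks, vs, lengths = [], [], [], []
        for name, x in tokens.items():
            p = self.proj[name]
            q, k, v = p.qkv(p.norm(x)).chunk(3, dim=-1)
            qs.append(q); ks.append(k); vs.append(v); lengths.append(x.shape[1])
        q, k, v = (torch.cat(t, dim=1) for t in (qs, ks, vs))

        def split_heads(t):  # (B, N, D) -> (B, H, N, D/H)
            b, n, d = t.shape
            return t.view(b, n, self.n_heads, d // self.n_heads).transpose(1, 2)

        out = F.scaled_dot_product_attention(
            split_heads(q), split_heads(k), split_heads(v))
        out = out.transpose(1, 2).reshape(q.shape)

        # Split the joint output back into modalities and apply each modality's
        # own output projection with a residual connection.
        result, start = {}, 0
        for name, n in zip(tokens, lengths):
            result[name] = tokens[name] + self.proj[name].out(out[:, start:start + n])
            start += n
        return result


block = JointAttentionBlock()
toks = {m: torch.randn(2, n, 256)
        for m, n in [("vision", 64), ("language", 16), ("world", 64), ("action", 8)]}
fused = block(toks)  # same keys and shapes, now cross-modally fused

In the full model, the vision and language streams would be decoded autoregressively while the world-modeling and action streams would be produced by diffusion; this sketch only shows the shared-attention, separate-projection idea.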

Paper Structure

This paper contains 16 sections, 6 equations, 6 figures, and 4 tables.

Figures (6)

  • Figure 1: Visualizations of our model for instructional driving. We propose a unified vision-language-world-action model, Vega, for instruction-based generation and planning. Vega can predict multiple trajectories in the same scenario following diverse instructions.
  • Figure 2: Overview of our model. Compared to traditional imitation driving models, which can only predict the single expert trajectory, Vega can follow natural language instructions to generate diverse planning trajectories and future image predictions.
  • Figure 3: Framework of our Unified Vision-Language-World-Action Model. We jointly model action planning and image generation using multi-modal inputs and a MoT architecture.
  • Figure 4: Ablation of interleaving image-action sequences. We compare the training losses of models trained on non-interleaved sequences (original) and interleaved sequences of different lengths; a sketch of this interleaving follows the list.
  • Figure 5: Instruction-based planning examples. We visualize the effects of language instructions on action planning with front-view camera images and BEV maps.
  • ...and 1 more figure
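Figure 4 ablates how image and action tokens are ordered within a training sequence. The toy function below is our illustrative reconstruction of that idea, not the paper's data pipeline: it turns per-timestep image and action tokens into an interleaved sequence, with a chunk length controlling how many timesteps are grouped before alternating.

# Hypothetical sketch of the sequence layouts compared in Figure 4; the token
# lists and chunk length are illustrative, not the paper's exact format.
def interleave(image_tokens: list, action_tokens: list, chunk: int = 1) -> list:
    """Alternate `chunk` timesteps of image tokens with the matching actions."""
    assert len(image_tokens) == len(action_tokens)
    seq = []
    for t in range(0, len(image_tokens), chunk):
        seq.extend(image_tokens[t:t + chunk])   # observations for this chunk
        seq.extend(action_tokens[t:t + chunk])  # actions paired with them
    return seq


imgs = ["img_0", "img_1", "img_2", "img_3"]
acts = ["act_0", "act_1", "act_2", "act_3"]
print(imgs + acts)                       # non-interleaved (original): all images, then all actions
print(interleave(imgs, acts))            # ['img_0', 'act_0', 'img_1', 'act_1', ...]
print(interleave(imgs, acts, chunk=2))   # two-step chunks: imgs 0-1, acts 0-1, ...

Interleaving keeps each action adjacent to the most recent observation in the sequence, which is one plausible reason the ablation compares interleaved layouts of different lengths against the non-interleaved baseline.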