Latent Bridge: Feature Delta Prediction for Efficient Dual-System Vision-Language-Action Model Inference

Yudong Liu, Yuan Li, Zijia Tang, Yuxi Zheng, Yueqian Lin, Qinsi Wang, Yi Li, Shuangjun Liu, Shuai Zhang, Taotao Jing, Dashan Gao, Ning Bi, Jingwei Sun, Yiran Chen, Hai Li

Abstract

Dual-system Vision-Language-Action (VLA) models achieve state-of-the-art robotic manipulation but are bottlenecked by the VLM backbone, which must execute at every control step while producing temporally redundant features. We propose Latent Bridge, a lightweight model that predicts VLM output deltas between timesteps, enabling the action head to operate on predicted outputs while the expensive VLM backbone is called only periodically. We instantiate Latent Bridge on two architecturally distinct VLAs: GR00T-N1.6 (feature-space bridge) and π0.5 (KV-cache bridge), demonstrating that the approach generalizes across VLA designs. Our task-agnostic DAgger training pipeline transfers across benchmarks without modification. Across four LIBERO suites, 24 RoboCasa kitchen tasks, and the ALOHA sim transfer-cube task, Latent Bridge achieves 95-100% performance retention while reducing VLM calls by 50-75%, yielding 1.65-1.73x net per-episode speedup.
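The core inference pattern from the abstract can be sketched as a control loop in which the full VLM backbone runs only every few steps and a lightweight bridge predicts feature deltas in between. This is a minimal illustrative sketch, not the authors' implementation; the names `run_vlm`, `bridge`, and `action_head` are hypothetical placeholders.

```python
# Hypothetical sketch of the Latent Bridge inference loop: the expensive VLM
# backbone is called only every `period` control steps, while a lightweight
# bridge predicts feature deltas for the intermediate steps. All function
# names here are illustrative assumptions, not the paper's API.

def rollout(obs_stream, run_vlm, bridge, action_head, period=4):
    """Yield actions while invoking the VLM only every `period` steps."""
    feat = None
    for t, obs in enumerate(obs_stream):
        if t % period == 0:
            feat = run_vlm(obs)              # full VLM backbone call (slow)
        else:
            feat = feat + bridge(feat, obs)  # predicted feature delta (fast)
        yield action_head(feat, obs)
```

With `period=4`, a 100-step episode triggers only 25 VLM calls (a 75% reduction), matching the savings range reported in the abstract.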

Paper Structure

This paper contains 70 sections, 7 equations, 12 figures, 13 tables.

Figures (12)

  • Figure 1: Latent Bridge reduces VLM backbone calls by predicting feature deltas between timesteps. The bridge operates at orders-of-magnitude lower latency, enabling 50--75% VLM savings with 95--100% task performance retention across two VLA architectures and diverse benchmarks.
  • Figure 2: Architecture comparison. Both variants use a DiT backbone with AdaLN conditioning. GR00T operates on a single feature vector; $\pi_{0.5}$ operates on per-layer KV pairs. Zero-initialized output ensures the bridge starts at the copy baseline.
  • Figure 3: Task-agnostic three-stage pipeline. The same pipeline transfers across all LIBERO suites, RoboCasa, and ALOHA sim without modification. DAgger closes the distribution gap between sync training and bridge deployment.
  • Figure 4: Bridge ablations on $\pi_{0.5}$ (all values are success rate in %, mean across 3 seeds). Vision and stable context both matter with task complexity; 19M matches 148M.
  • Figure 5: VLM call period vs. performance on $\pi_{0.5}$. Spatial/Object/Goal stay above 95% SR up to $f{=}8$; LIBERO-10 degrades earlier due to long-horizon error compounding.
  • ...and 7 more figures
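The zero-initialized output mentioned in the Figure 2 caption guarantees that, at the start of training, the bridge predicts a zero delta and therefore exactly reproduces the last VLM features (the "copy baseline"). A hedged sketch of this initialization, with a made-up `BridgeHead` class standing in for the actual output projection:

```python
# Illustration of zero-initialized output: with the final projection set to
# zero, the predicted delta is exactly 0, so adding it to the cached VLM
# features leaves them unchanged. `BridgeHead` is a hypothetical stand-in
# for the bridge's real output layer.
import numpy as np

class BridgeHead:
    def __init__(self, dim):
        self.W = np.zeros((dim, dim))  # zero-init weight: delta starts at 0
        self.b = np.zeros(dim)         # zero-init bias

    def __call__(self, h):
        return h @ self.W + self.b     # predicted feature delta

dim = 8
head = BridgeHead(dim)
feat = np.random.default_rng(0).normal(size=dim)  # cached VLM features
updated = feat + head(feat)            # equals feat exactly at init
```

This design choice means the bridge can never perform worse than feature-copying at initialization, giving training a safe starting point.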