ViVa: A Video-Generative Value Model for Robot Reinforcement Learning

Jindi Lv, Hao Li, Jie Li, Yifei Nie, Fankun Kong, Yang Wang, Xiaofeng Wang, Zheng Zhu, Chaojun Ni, Qiuping Deng, Hengtao Li, Jiancheng Lv, Guan Huang

Abstract

Vision-language-action (VLA) models have advanced robot manipulation through large-scale pretraining, but real-world deployment remains challenging due to partial observability and delayed feedback. Reinforcement learning addresses this via value functions, which assess task progress and guide policy improvement. However, existing value models built on vision-language models (VLMs) struggle to capture temporal dynamics, undermining reliable value estimation in long-horizon tasks. In this paper, we propose ViVa, a video-generative value model that repurposes a pretrained video generator for value estimation. Taking the current observation and robot proprioception as input, ViVa jointly predicts future proprioception and a scalar value for the current state. By leveraging the spatiotemporal priors of a pretrained video generator, our approach grounds value estimation in anticipated embodiment dynamics, moving beyond static snapshots to intrinsically couple value with foresight. Integrated into RECAP, ViVa delivers substantial improvements on real-world box assembly. Qualitative analysis across all three tasks confirms that ViVa produces more reliable value signals, accurately reflecting task progress. By leveraging spatiotemporal priors from video corpora, ViVa also generalizes to novel objects, highlighting the promise of video-generative models for value estimation.
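
To make the input/output contract described above concrete, here is a minimal, hypothetical sketch of the interface such a value model exposes. The toy feed-forward network, names, and dimensions are illustrative assumptions, not the paper's architecture (ViVa itself builds on a pretrained video-diffusion backbone).

```python
import torch
import torch.nn as nn

# Toy stand-in for a value model with ViVa's interface: given the current
# observation and proprioception, jointly predict future proprioception and
# a scalar value for the current state. All names and shapes are assumptions.
class ToyValueModel(nn.Module):
    def __init__(self, obs_dim=3 * 64 * 64, proprio_dim=14, horizon=8, feat_dim=256):
        super().__init__()
        self.horizon, self.proprio_dim = horizon, proprio_dim
        self.encode_obs = nn.Linear(obs_dim, feat_dim)
        self.encode_prop = nn.Linear(proprio_dim, feat_dim)
        self.future_head = nn.Linear(feat_dim, horizon * proprio_dim)  # future proprioception
        self.value_head = nn.Linear(feat_dim, 1)                       # scalar value

    def forward(self, obs, proprio):
        h = torch.relu(self.encode_obs(obs) + self.encode_prop(proprio))
        future = self.future_head(h).view(-1, self.horizon, self.proprio_dim)
        value = torch.sigmoid(self.value_head(h)).squeeze(-1)  # normalized return in [0, 1]
        return future, value

model = ToyValueModel()
obs = torch.randn(1, 3 * 64 * 64)   # flattened placeholder image observation
prop = torch.randn(1, 14)           # placeholder joint/gripper state
future_proprio, value = model(obs, prop)
```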

Paper Structure

This paper contains 30 sections, 8 equations, 10 figures, 2 tables.

Figures (10)

  • Figure 1: Overall architecture of ViVa. Left: Current robot proprioception and scalar value are mapped to latent frames via repeat padding and broadcast operations. Right: The injected latents form a unified sequence in which current observations (blank token, proprioception, and multi-view images) serve as clean conditioning frames, while future proprioception and value are noisy target frames. The diffusion Transformer denoises these targets conditioned on the clean prefix, jointly predicting the future embodied state and a scalar value defined as the normalized return. A sketch of this sequence layout follows the figure list.
  • Figure 2: Illustration of the three real-world tasks. For each task, we show the initial state (left), key intermediate stages (middle), and the final successful state (right).
  • Figure 3: Value estimation during a box-assembling task. The plot compares value estimates from a VLM-based function and our ViVa model over time. Two failure events are highlighted (blue-shaded). The VLM-based value remains largely insensitive to these errors, suggesting overfitting to successful trajectories. In contrast, ViVa exhibits sharp drops precisely when these mistakes occur, demonstrating its sensitivity to suboptimal actions through grounding in anticipated embodiment dynamics.
  • Figure 4: Value estimation during a shirt-folding task. The plot compares value estimates from a VLM-based function and our ViVa model over time. Apart from erratic drops (orange-shaded), the VLM-based value remains largely flat throughout the episode, failing to reflect gradual progress toward successful completion. ViVa, by contrast, maintains a stable value progression, accurately reflecting continuous task progress.
  • Figure 5: Value estimation during toilet paper organization. The plot compares value estimates from a VLM-based function and ViVa over time. Two key milestones are highlighted (blue-shaded): roll alignment and label application. ViVa exhibits clear value increases precisely at these milestones, yielding a smooth trajectory that tracks task progress. The VLM-based value, in contrast, remains largely insensitive to these events.
  • ...and 5 more figures
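
The Figure 1 caption above describes how low-dimensional signals (proprioception and the value scalar) are injected into the frame sequence of a video generator alongside image latents. Below is a rough sketch of that sequence layout under assumed shapes; the repeat-padding scheme, the blank token, and all dimensions are illustrative guesses rather than the paper's exact implementation.

```python
import torch

# Broadcast a (B, D) vector into a (B, C, H, W) pseudo latent frame via repeat
# padding, so proprioception and value can share the sequence with image latents.
def broadcast_to_frame(x, frame_shape):
    B = x.shape[0]
    C, H, W = frame_shape
    flat = x.repeat(1, (C * H * W) // x.shape[1] + 1)[:, : C * H * W]
    return flat.view(B, C, H, W)

B, D, frame_shape = 2, 14, (4, 8, 8)
img_latents = torch.randn(B, 3, *frame_shape)                                 # current multi-view latents (3 views assumed)
prop_frame = broadcast_to_frame(torch.randn(B, D), frame_shape).unsqueeze(1)  # current proprioception
blank = torch.zeros(B, 1, *frame_shape)                                       # blank conditioning token

# Clean prefix: blank token + current proprioception + multi-view images.
clean_prefix = torch.cat([blank, prop_frame, img_latents], dim=1)

# Noisy targets: future proprioception and value frames, which the diffusion
# Transformer would denoise conditioned on the clean prefix.
future_prop = broadcast_to_frame(torch.randn(B, D), frame_shape).unsqueeze(1)
value_frame = broadcast_to_frame(torch.rand(B, 1), frame_shape).unsqueeze(1)  # normalized return target
noisy_targets = torch.cat([future_prop, value_frame], dim=1) + torch.randn(B, 2, *frame_shape)

sequence = torch.cat([clean_prefix, noisy_targets], dim=1)  # unified frame sequence fed to the denoiser
```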