VideoWeaver: Multimodal Multi-View Video-to-Video Transfer for Embodied Agents

George Eskandar, Fengyi Shen, Mohammad Altillawi, Dong Chen, Yang Bai, Liudi Yang, Ziyuan Liu

Abstract

Recent progress in video-to-video (V2V) translation has enabled realistic resimulation of embodied AI demonstrations, a capability that allows pretrained robot policies to transfer to new environments without additional data collection. However, prior works operate on only a single view at a time, whereas embodied AI tasks are commonly captured from multiple synchronized cameras to support policy learning. Naively applying single-view models independently to each camera leads to inconsistent appearance across views, and standard transformer architectures do not scale to multi-view settings due to the quadratic cost of cross-view attention. We present VideoWeaver, the first multimodal multi-view V2V translation framework. VideoWeaver is first trained as a single-view flow-based V2V model. To extend it to the multi-view regime, we ground all views in a shared 4D latent space derived from a feed-forward spatial foundation model, Pi3. This encourages view-consistent appearance even under wide baselines and dynamic camera motion. To scale beyond a fixed number of cameras, we assign each view its own diffusion timestep during training, enabling the model to learn both joint and conditional view distributions; this in turn allows autoregressive synthesis of new viewpoints conditioned on existing ones. Experiments show performance on par with or superior to the state of the art on single-view translation benchmarks and, for the first time, physically and stylistically consistent multi-view translations, including challenging egocentric and heterogeneous-camera setups central to world randomization for robot learning.
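
The abstract's scaling mechanism is that every camera view is noised with its own timestep during training, so at inference already-generated views can be held clean and act as conditioning for new ones. The sketch below illustrates that idea under a generic rectified-flow / flow-matching setup; the function name, tensor shapes, and interpolation convention are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def multiview_flow_matching_loss(model, latents, cond):
    """One training step of the per-view-timestep idea (illustrative only).

    latents: (B, V, C, T, H, W) video latents for V synchronized camera views.
    cond:    whatever conditioning the model expects (text, depth, sketch, ...).
    """
    B, V = latents.shape[:2]
    # Each view gets its own timestep, so some views can be nearly clean while
    # others are nearly pure noise -- this is what lets the model learn the
    # conditional distribution of some views given others, not only the joint.
    t = torch.rand(B, V, device=latents.device)               # (B, V) in [0, 1]
    noise = torch.randn_like(latents)
    t_ = t.view(B, V, 1, 1, 1, 1)
    # Rectified-flow interpolation: data at t=0, noise at t=1 (assumed convention).
    noisy = (1.0 - t_) * latents + t_ * noise
    target_velocity = noise - latents
    pred_velocity = model(noisy, t, cond)                      # model sees per-view t
    return F.mse_loss(pred_velocity, target_velocity)

# At inference, a new viewpoint can be synthesized autoregressively by keeping the
# timesteps of already-generated views fixed at 0 (clean) while integrating the
# flow ODE only for the new view's latent.
```

Because the conditioning is expressed purely through per-view timesteps rather than a fixed input layout, the same network can, in principle, be applied to any number of cameras at inference.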

Paper Structure

This paper contains 19 sections, 5 equations, 9 figures, and 3 tables.

Figures (9)

  • Figure 1: We introduce VideoWeaver, the first flow model to synchronously translate multiple camera viewpoints into a new style. VideoWeaver is a multimodal (depth + sketch) multi-view V2V model that can scale to a large number of views. Our insight is to unify the latent space across views by injecting the coordinates $(x, y, z)$ of an (uncolored) 4D pointcloud into the flow model.
  • Figure 2: Architecture. VideoWeaver is a DiT with factorized 4D joint attention blocks and a Mixture-of-Experts module for multimodal conditioning. In addition, a points encoder conditions the network on a 4D pointcloud estimated by Pi3 [pi3].
  • Figure 3: Our insight is to use the pixelwise pointcloud prediction of Pi3 [pi3] as a correspondence map that unifies the latent space across different views at the input of the DiT (a code sketch of this conditioning follows the figure list). For visualization purposes only, every point $(x, y, z)$ is color-coded to show the established spatial correspondences.
  • Figure 4: Comparison with state-of-the-art V2V models on single viewpoints from Droid [droid] and Bridge [bridge].
  • Figure 5: Ablation study of the multi-view model on the Agibot dataset [agibot].
  • ...and 4 more figures
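
Figures 1-3 describe the second key mechanism: conditioning every view on the $(x, y, z)$ coordinates of one shared, uncolored 4D pointcloud predicted by Pi3, so that pixels observing the same 3D point receive identical coordinate features regardless of camera. The sketch below shows one plausible way to wire such a points encoder into a DiT; the module name, patch size, and additive fusion are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class PointsEncoder(nn.Module):
    """Patchifies a per-pixel (x, y, z) map to the DiT token grid (illustrative)."""

    def __init__(self, latent_dim: int, patch: int = 2):
        super().__init__()
        self.proj = nn.Conv3d(3, latent_dim, kernel_size=patch, stride=patch)

    def forward(self, pointmap: torch.Tensor) -> torch.Tensor:
        # pointmap: (B*V, 3, T, H, W) coordinates from Pi3, expressed in one
        # shared world frame for all V views.
        return self.proj(pointmap)  # (B*V, latent_dim, T', H', W')

def inject_pointcloud(video_tokens: torch.Tensor,
                      pointmap: torch.Tensor,
                      encoder: PointsEncoder) -> torch.Tensor:
    """Adds shared-pointcloud features to every view's tokens before the DiT blocks.

    video_tokens: (B, V, latent_dim, T', H', W') patchified video latents.
    pointmap:     (B, V, 3, T, H, W) Pi3 pointcloud coordinates per view.
    """
    B, V = video_tokens.shape[:2]
    pts = encoder(pointmap.flatten(0, 1)).unflatten(0, (B, V))
    # Pixels that see the same 3D point get the same added feature in every view,
    # which is what ties the views together in the latent space.
    return video_tokens + pts
```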