Learning Multi-View Spatial Reasoning from Cross-View Relations

Suchae Jeong, Jaehwi Song, Haeone Lee, Hanna Kim, Jian Kim, Dongjun Lee, Dong Kyu Shin, Changyeon Kim, Dongyoon Hahm, Woogyeol Jin, Juheon Choi, Kimin Lee

Abstract

Vision-language models (VLMs) have achieved impressive results on single-view vision tasks, but lack the multi-view spatial reasoning capabilities essential for embodied AI systems to understand 3D environments and manipulate objects across different viewpoints. In this work, we introduce Cross-View Relations (XVR), a large-scale dataset designed to teach VLMs spatial reasoning across multiple views. XVR comprises 100K vision-question-answer samples derived from 18K diverse 3D scenes and 70K robotic manipulation trajectories, spanning three fundamental spatial reasoning tasks: Correspondence (matching objects across views), Verification (validating spatial relationships), and Localization (identifying object positions). VLMs fine-tuned on XVR achieve substantial improvements on established multi-view and robotic spatial reasoning benchmarks (MindCube and RoboSpatial). When integrated as backbones in Vision-Language-Action models, XVR-trained representations improve success rates on RoboCasa. Our results demonstrate that explicit training on cross-view spatial relations significantly enhances multi-view reasoning and transfers effectively to real-world robotic manipulation.

Figures (17)

  • Figure 1: Overview of the question–answer (QA) structure in XVR. The figure shows representative examples from eight task types across correspondence, verification, and localization categories, demonstrating the consistent QA format used throughout the dataset. Each category is color-coded: red for Correspondence (Point, Directional), green for Verification (Spatial, Temporal), and blue for Localization (Viewpoint, Directional View, Cross-Scenario, Language-Conditioned).
  • Figure 2: Generalization to external spatial benchmarks (MindCube-Tiny and RoboSpatial-Home). Training on XVR improves Qwen3-VL-2B across all tasks, with the largest gains in Compatibility (+7.6%) and Among (+7.0%).
  • Figure 3: Visualization of the three manipulation tasks and their camera-view configurations used for VLA transfer evaluation.
  • Figure 4: Transfer to Embodied Tasks: RoboCasa VLA Performance. Fine-tuning on XVR improves Qwen3-VL-2B performance on RoboCasa manipulation tasks, showing effective transfer of spatial reasoning skills to robotic action prediction.
  • Figure 5: Task generation pipeline for XVR. The pipeline branches into geometry-based generation (top) for tasks using 3D geometric information and metadata-based generation (bottom) for tasks using trajectory annotations. Geometry-based generation processes general domain data ($\mathcal{I}, \mathcal{P}, \mathcal{X}$) through 3D-to-2D projection and visibility checking to create Point, Directional, Spatial, and Viewpoint tasks. Metadata-based generation processes robotic domain data ($\mathcal{I}, \mathcal{T}, \mathcal{M}$) through temporal and camera metadata extraction to create Temporal, Cross-View, Directional View, and Language-Conditioned tasks. Both pipelines converge at QA assembly to produce final question-answer pairs. (A minimal projection-and-visibility sketch follows this figure list.)
  • ...and 12 more figures
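
The geometry-based branch in Figure 5 depends on projecting annotated 3D points into each candidate view and confirming that they are actually visible before a question-answer pair is emitted. Below is a minimal sketch of that kind of projection-and-visibility check, assuming a pinhole camera model with known intrinsics/extrinsics and a per-view depth map; the function names, parameters, and occlusion tolerance are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def project_points(points_world, extrinsic, intrinsic):
    """Project Nx3 world-space points to pixel coordinates and camera depth.

    extrinsic: 4x4 world-to-camera transform; intrinsic: 3x3 pinhole matrix.
    Returns (uv, depth): Nx2 pixel coordinates and N camera-frame depths.
    """
    n = points_world.shape[0]
    homog = np.hstack([points_world, np.ones((n, 1))])     # Nx4 homogeneous coords
    cam = (extrinsic @ homog.T).T[:, :3]                    # Nx3 camera-frame coords
    depth = cam[:, 2]
    pix = (intrinsic @ cam.T).T                             # Nx3 projective coords
    uv = pix[:, :2] / np.clip(pix[:, 2:3], 1e-6, None)      # Nx2 pixel coords
    return uv, depth

def is_visible(uv, depth, depth_map, img_w, img_h, tol=0.05):
    """Check that a projected point lies inside the image and is not occluded.

    depth_map: HxW rendered depth for the view; tol: occlusion tolerance (meters).
    """
    u, v = int(round(uv[0])), int(round(uv[1]))
    if depth <= 0 or not (0 <= u < img_w and 0 <= v < img_h):
        return False                                        # behind camera or out of frame
    return depth <= depth_map[v, u] + tol                   # occluded if something closer

# Illustrative usage: a point 2 m in front of an identity-pose camera.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
uv, d = project_points(np.array([[0.0, 0.0, 2.0]]), np.eye(4), K)
```

Only points passing such a check would be eligible for Point, Directional, Spatial, or Viewpoint questions in a given view; the exact filtering criteria used by XVR are described in the paper's method section.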