SpatialPoint: Spatial-aware Point Prediction for Embodied Localization

Qiming Zhu, Zhirui Fang, Tianming Zhang, Chuanxiu Liu, Xiaoke Jiang, Lei Zhang

Abstract

Embodied intelligence fundamentally requires the ability to determine where to act in 3D space. We formalize this requirement as embodied localization -- the problem of predicting executable 3D points conditioned on visual observations and language instructions. We instantiate embodied localization with two complementary target types: touchable points, surface-grounded 3D points that enable direct physical interaction, and air points, free-space 3D points that specify placement and navigation goals, directional constraints, or geometric relations. Embodied localization is inherently a problem of embodied 3D spatial reasoning -- yet, despite the widespread adoption of RGB-D sensors in robotics, most existing vision-language systems rely predominantly on RGB inputs and must therefore reconstruct geometry implicitly, which limits cross-scene generalization. To address this gap, we propose SpatialPoint, a carefully designed, spatial-aware vision-language framework that integrates structured depth into a vision-language model (VLM) and generates camera-frame 3D coordinates. We construct a 2.6M-sample RGB-D dataset covering QA pairs for both touchable and air points for training and evaluation. Extensive experiments demonstrate that incorporating depth into VLMs significantly improves embodied localization performance. We further validate SpatialPoint through real-robot deployment on three representative tasks: language-guided robotic-arm grasping at specified locations, object placement at target destinations, and mobile-robot navigation to goal positions.

Paper Structure

This paper contains 57 sections, 3 equations, 12 figures, and 7 tables.

Figures (12)

  • Figure 1: Embodied localization as executable 3D target prediction. We reduce embodied execution to predicting camera-frame 3D points of two complementary types: touchable points grounded on observed surfaces, and air points located in free space and specified by spatial language.
  • Figure 2: Our data engine. Touchable points are converted from RoboAfford [tang2025roboafford] 2D annotations using monocular depth estimation [lin2025da3]. Air points are generated from objects' 3D relations, which are computed by lifting DINO-X [ren2024dino] detections (caption/bbox/mask) into the camera frame using the estimated depth map and intrinsics, and then applying geometric computations (see the back-projection sketch after this list).
  • Figure 3: Model overview. We add a dedicated depth encoder by duplicating the original visual backbone and feeding it a three-channel depth map to obtain depth tokens, wrapped by <dpt_start> and <dpt_end>. RGB, depth, and text tokens form one causal sequence, and the LM head decodes structured $(u,v,Z)$ point lists (see the parsing sketch after this list).
  • Figure 4: Surface-target qualitative comparison on RoboAfford-Eval (touchable-point).
  • Figure 5: Free-space qualitative comparison on our benchmark. Under the same queries, our model better satisfies air-point relation constraints than Qwen3-VL, demonstrating more reliable air-point target prediction.
  • ...and 7 more figures
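
The data engine in Figure 2 rests on standard pinhole back-projection: a pixel $(u, v)$ with metric depth $Z$ and intrinsics $(f_x, f_y, c_x, c_y)$ lifts to the camera-frame point $\big((u-c_x)Z/f_x,\ (v-c_y)Z/f_y,\ Z\big)$. The minimal sketch below illustrates that step together with one example of the subsequent geometric computation (an air point a fixed height above an object); the function names, intrinsics values, offset, and the y-down frame convention are our illustrative assumptions, not the paper's published pipeline.

```python
import numpy as np

def backproject(u: float, v: float, Z: float,
                fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Lift a pixel (u, v) with metric depth Z to a camera-frame 3D point."""
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return np.array([X, Y, Z])

def air_point_above(obj_xyz: np.ndarray, offset_m: float = 0.15) -> np.ndarray:
    """Illustrative relation: a free-space point offset_m meters above an
    object, assuming a y-down camera frame (a hypothetical convention)."""
    return obj_xyz + np.array([0.0, -offset_m, 0.0])

# Example: lift the center of a detected bounding box with the estimated depth map.
# Intrinsics and depth values here are made-up stand-ins, not dataset values.
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0
u, v = 350.0, 260.0                      # bbox center in pixels (example)
depth = np.full((480, 640), 0.9)         # stand-in for the estimated depth map
obj = backproject(u, v, depth[int(v), int(u)], fx, fy, cx, cy)
target = air_point_above(obj)            # candidate air point for a QA pair
print(obj, target)
```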
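
Figure 3 leaves two practical details implicit: how a single-channel depth map becomes the three-channel input for the duplicated visual backbone, and how the structured $(u,v,Z)$ point lists are recovered from the decoded text. The minimal sketch below assumes one plausible treatment, normalizing and replicating the depth channel and regex-parsing the point list; the normalization scheme, the textual point format, and all function names are our assumptions, not the paper's specification.

```python
import re
import numpy as np

def depth_to_3ch(depth: np.ndarray, max_depth_m: float = 10.0) -> np.ndarray:
    """Turn an HxW metric depth map into a 3xHxW depth-encoder input.
    Normalizing to [0, 1] and replicating the channel are assumptions; the
    paper only states that a three-channel depth map is fed in."""
    d = np.clip(depth / max_depth_m, 0.0, 1.0)
    return np.stack([d, d, d], axis=0)

# Hypothetical serialization of the decoded point list; the paper specifies
# (u, v, Z) triplets but not their exact textual format.
_POINT = re.compile(r"\(\s*([\d.]+)\s*,\s*([\d.]+)\s*,\s*([\d.]+)\s*\)")

def parse_points(decoded: str) -> list[tuple[float, float, float]]:
    """Extract (u, v, Z) triplets from the model's decoded text."""
    return [tuple(map(float, m.groups())) for m in _POINT.finditer(decoded)]

print(parse_points("Target: (350.0, 260.0, 0.92)"))  # -> [(350.0, 260.0, 0.92)]
```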