Scalable Object Relation Encoding for Better 3D Spatial Reasoning in Large Language Models

Shengli Zhou, Minghang Zheng, Feng Zheng, Yang Liu

Abstract

Spatial reasoning focuses on locating target objects based on spatial relations in 3D scenes, which plays a crucial role in developing intelligent embodied agents. Due to the limited availability of 3D scene-language paired data, it is challenging to train models with strong reasoning ability from scratch. Previous approaches have attempted to inject 3D scene representations into the input space of Large Language Models (LLMs) and leverage their pretrained comprehension and reasoning abilities for spatial reasoning. However, models encoding absolute positions struggle to extract spatial relations from prematurely fused features, while methods that explicitly encode all spatial relations as input tokens (quadratic in the number of objects) suffer from poor scalability. To address these limitations, we propose QuatRoPE, a novel positional embedding method whose input length is linear in the number of objects and which explicitly computes pairwise spatial relations through the dot product in attention layers. QuatRoPE's holistic vector encoding of 3D coordinates guarantees a high degree of spatial consistency, maintaining fidelity to the scene's geometric integrity. Additionally, we introduce the Isolated Gated RoPE Extension (IGRE), which effectively limits QuatRoPE's influence to object-related tokens, thereby minimizing interference with the LLM's existing positional embeddings and preserving the LLM's original capabilities. Extensive experiments demonstrate the effectiveness of our approaches. The code and data are available at https://github.com/oceanflowlab/QuatRoPE.
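The core mechanism described above, absolute positions embedded per token that become pairwise relative positions inside attention, is the defining property of rotary position embeddings (RoPE), which QuatRoPE extends to 3D. The following minimal NumPy sketch illustrates that underlying 1D RoPE identity: rotating queries and keys by position-dependent angles makes their dot product depend only on the relative offset. This is an illustrative sketch of the principle, not the paper's quaternion-based 3D scheme.

```python
import numpy as np

def rope_rotate(x, pos, theta=10000.0):
    """Standard 1D RoPE: rotate consecutive dimension pairs of x by
    angles proportional to the scalar position pos."""
    d = x.shape[-1]
    freqs = theta ** (-np.arange(0, d, 2) / d)  # one frequency per dim pair
    ang = pos * freqs
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

rng = np.random.default_rng(0)
q, k = rng.normal(size=8), rng.normal(size=8)

# The attention dot product depends only on the relative offset
# (here -3 in both cases), not on the absolute positions:
s1 = rope_rotate(q, 5.0) @ rope_rotate(k, 2.0)
s2 = rope_rotate(q, 105.0) @ rope_rotate(k, 102.0)
print(np.isclose(s1, s2))  # True
```

Because the rotation is applied independently per dimension pair, a naive 3D extension would assign separate pairs to the x, y, and z coordinates; this is exactly the decoupled encoding that Figure 1(c) shows can inflate attention scores, motivating QuatRoPE's holistic vector encoding.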

Paper Structure

This paper contains 23 sections, 31 equations, 3 figures, and 10 tables.

Figures (3)

  • Figure 1: (a) In QuatRoPE, we embed the absolute 3D position of each object into the corresponding token, keeping the input length linear in the number of objects. (b) By leveraging a dedicated rotation scheme, when tokens with 3D position embeddings interact in the attention layers of the LLM, the absolute coordinates are transformed into pairwise relative positions, empowering spatial reasoning. (c) In previous methods, positions are decoupled into individual coordinates, so when the coordinates along one axis are close (marked in green), the attention score is incorrectly inflated. (d) QuatRoPE encodes positions as holistic vectors, correctly representing spatial relations.
  • Figure 2: An illustration of ASR's construction pipeline.
  • Figure 3: Qualitative results on the ScanRefer dataset. Target objects are correctly grounded by QuatRoPE (green), whereas the baseline Chat-Scene produces incorrect predictions (red).