
EgoMind: Activating Spatial Cognition through Linguistic Reasoning in MLLMs

Zhenghao Chen, Huiqun Wang, Di Huang

Abstract

Multimodal large language models (MLLMs) are increasingly being applied to spatial cognition tasks, where they are expected to understand and interact with complex environments. Most existing works improve spatial reasoning by introducing 3D priors or geometric supervision, which enhances performance but incurs substantial data preparation and alignment costs. In contrast, purely 2D approaches often struggle with multi-frame spatial reasoning due to their limited ability to capture cross-frame spatial relationships. To address these limitations, we propose EgoMind, a Chain-of-Thought framework that enables geometry-free spatial reasoning through Role-Play Caption, which jointly constructs a coherent linguistic scene graph across frames, and Progressive Spatial Analysis, which progressively reasons toward task-specific questions. With only 5K auto-generated SFT samples and 20K RL samples, EgoMind achieves competitive results on VSI-Bench, SPAR-Bench, SITE-Bench, and SPBench, demonstrating its effectiveness in strengthening the spatial reasoning capabilities of MLLMs and highlighting the potential of linguistic reasoning for spatial cognition. Code and data are released at https://github.com/Hyggge/EgoMind.

Paper Structure

This paper contains 26 sections, 10 equations, 7 figures, 5 tables.

Figures (7)

  • Figure 1: Illustration of the differences among spatial reasoning approaches. Direct questioning often fails because of missing cross-frame correlations and limited awareness of implicit objects needed for spatial bridging. Guided questioning helps the model gradually establish these associations. In contrast, EgoMind CoT explicitly models viewpoint transitions and implicit spatial bridges, builds a coherent global scene representation, and reliably produces the correct answer.
  • Figure 2: Illustration of the proposed EgoMind framework. MLLMs powered by EgoMind first generate a Role-Play Caption by producing per-frame scene descriptions and inferring viewpoint transitions. The model then performs Progressive Spatial Analysis (PSA) to identify relevant objects, expand spatial dependencies via implicit spatial bridges, and form a coherent reasoning chain. Finally, the system outputs the EgoMind CoT, unifying RPC and PSA into an interpretable spatial reasoning process.
  • Figure 3: Illustration of the data generation pipeline. Randomly sampled video frames and a tailored instruction are first given to GPT-4o to produce detailed per-frame descriptions. Qwen2.5-72B then infers viewpoint transitions and synthesizes them into the Role-Play Caption (RPC). In parallel, another GPT-4o instance, guided by a structured instruction, extracts the required spatial context from the multi-frame input and question. Finally, GPT-4o merges the RPC and spatial context to generate the final EgoMind Chain-of-Thought.
  • Figure D: A case study of relational reasoning with the Qwen2.5-VL-7B model enhanced by the EgoMind framework.
  • Figure E: Case studies of EgoMind.
  • ...and 2 more figures
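
To make the Figure 3 data-generation pipeline concrete, the snippet below is a minimal, illustrative Python outline. It assumes the pipeline reduces to four model calls (per-frame captioning, viewpoint-transition synthesis, spatial-context extraction, and final merging); `call_gpt4o`, `call_qwen`, and `build_egomind_cot` are hypothetical placeholder names, not part of the released EgoMind code.

```python
# Minimal sketch of the Figure 3 data-generation pipeline (hypothetical stubs).

from typing import List, Optional


def call_gpt4o(prompt: str, frames: Optional[List[str]] = None) -> str:
    """Hypothetical stub for a GPT-4o call; `frames` are image paths for multimodal input."""
    raise NotImplementedError("Wire up your own multimodal API client here.")


def call_qwen(prompt: str) -> str:
    """Hypothetical stub for a Qwen2.5-72B call."""
    raise NotImplementedError("Wire up your own LLM API client here.")


def build_egomind_cot(frames: List[str], question: str) -> str:
    # 1) GPT-4o produces a detailed description of each sampled frame.
    per_frame = [call_gpt4o("Describe this frame in detail.", [f]) for f in frames]

    # 2) Qwen2.5-72B infers viewpoint transitions across frames and
    #    synthesizes them into the Role-Play Caption (RPC).
    rpc = call_qwen(
        "Infer the viewpoint transitions between these frame descriptions and "
        "merge them into a single Role-Play Caption:\n" + "\n".join(per_frame)
    )

    # 3) In parallel, another GPT-4o instance extracts the spatial context
    #    required by the question from the multi-frame input.
    spatial_context = call_gpt4o(
        f"Extract the spatial context needed to answer: {question}", frames
    )

    # 4) GPT-4o merges the RPC and spatial context into the final EgoMind CoT.
    return call_gpt4o(
        "Combine the following into one coherent chain-of-thought.\n"
        f"Role-Play Caption:\n{rpc}\n\nSpatial context:\n{spatial_context}\n\n"
        f"Question: {question}"
    )
```

The exact prompts, sampling strategy, and quality filters are described in the paper's pipeline section; the sketch only conveys the ordering and data flow between the captioning, transition-synthesis, context-extraction, and merging stages.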