Stop Wandering: Efficient Vision-Language Navigation via Metacognitive Reasoning

Xueying Li, Feng Lyu, Hao Wu, Mingliu Liu, Jia-Nan Liu, Guozi Liu

Abstract

Training-free Vision-Language Navigation (VLN) agents powered by foundation models can follow instructions and explore 3D environments. However, existing approaches rely on greedy frontier selection and passive spatial memory, leading to inefficient behaviors such as local oscillation and redundant revisiting. We argue that this stems from a lack of metacognitive capabilities: the agent cannot monitor its exploration progress, diagnose strategy failures, or adapt accordingly. To address this, we propose MetaNav, a metacognitive navigation agent integrating spatial memory, history-aware planning, and reflective correction. Spatial memory builds a persistent 3D semantic map. History-aware planning penalizes revisiting to improve efficiency. Reflective correction detects stagnation and uses an LLM to generate corrective rules that guide future frontier selection. Experiments on GOAT-Bench, HM3D-OVON, and A-EQA show that MetaNav achieves state-of-the-art performance while reducing VLM queries by 20.7%, demonstrating that metacognitive reasoning significantly improves robustness and efficiency.

Paper Structure

This paper contains 14 sections, 5 equations, 6 figures, 6 tables, 1 algorithm.

Figures (6)

  • Figure 1: Qualitative trajectory comparison. Baselines suffer from local oscillation or failure when trapped by spatial ambiguities; MetaNav leverages episodic reflection to break deadlocks and generate efficient paths.
  • Figure 2: System overview of MetaNav. Spatial Memory Construction (D1) builds a persistent 3D semantic map and extracts frontiers from RGB-D input. History-Aware Heuristic Planning (D2) selects a frontier via a utility function combining semantic relevance, geometric cost, and episodic penalty, then executes it for a fixed replanning interval. Reflection and Correction (D3) maintains episodic memory, detects stagnation via information gain, and invokes an LLM to inject corrective rules into D2. Arrows indicate data flow; dashed arrows denote LLM/VLM queries.
  • Figure 3: Performance across instruction modalities on GOAT-Bench.
  • Figure 4: Effect of replanning interval on GOAT-Bench and HM3D-OVON.
  • Figure 5: Effect of short-term memory capacity $K$ on GOAT-Bench.
  • ...and 1 more figure
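The frontier-selection step summarized in Figure 2 can be illustrated with a minimal sketch. The paper states only that the utility function combines semantic relevance, geometric cost, and an episodic penalty for revisiting; the exact functional form, the weights (`w_sem`, `w_cost`, `w_penalty`), and the `Frontier` fields below are hypothetical placeholders, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Frontier:
    semantic_relevance: float  # e.g., VLM-estimated relevance to the instruction, in [0, 1]
    path_cost: float           # geometric cost, e.g., geodesic distance in meters
    visit_count: int           # episodic memory: how often this region was already explored

def frontier_utility(f: Frontier,
                     w_sem: float = 1.0,
                     w_cost: float = 0.1,
                     w_penalty: float = 0.5) -> float:
    """Score one frontier: reward relevance, penalize travel cost and revisits.
    Hypothetical linear combination; the agent would pick the argmax."""
    return (w_sem * f.semantic_relevance
            - w_cost * f.path_cost
            - w_penalty * f.visit_count)

candidates = [
    Frontier(semantic_relevance=0.9, path_cost=4.0, visit_count=2),  # relevant but revisited
    Frontier(semantic_relevance=0.7, path_cost=2.0, visit_count=0),  # fresh and nearby
]
best = max(candidates, key=frontier_utility)
```

Under these illustrative weights, the episodic penalty makes the unexplored frontier win even though the other is more semantically relevant, which is the anti-oscillation behavior the abstract describes.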