Beyond Textual Knowledge: Leveraging Multimodal Knowledge Bases for Enhancing Vision-and-Language Navigation

Dongsheng Yang, Yinfeng Yu, Liejun Wang

Abstract

Vision-and-Language Navigation (VLN) requires an agent to navigate complex unseen environments by following natural language instructions. However, existing methods often struggle to capture key semantic cues and align them accurately with visual observations. To address this limitation, we propose Beyond Textual Knowledge (BTK), a VLN framework that synergistically integrates environment-specific textual knowledge with generative image knowledge bases. BTK employs Qwen3-4B to extract goal-related phrases and uses Flux-Schnell to construct two large-scale image knowledge bases, R2R-GP and REVERIE-GP. In addition, we leverage BLIP-2 to build a large-scale textual knowledge base from panoramic views, providing environment-specific semantic cues. These multimodal knowledge bases are integrated via the Goal-Aware Augmentor and the Knowledge Augmentor, significantly enhancing semantic grounding and cross-modal alignment. Extensive experiments on the R2R dataset (7,189 trajectories) and the REVERIE dataset (21,702 instructions) demonstrate that BTK significantly outperforms existing baselines: on the test unseen splits of R2R and REVERIE, success rate (SR) improves by 5% and 2.07%, and success rate weighted by path length (SPL) improves by 4% and 3.69%, respectively. The source code is available at https://github.com/yds3/IPM-BTK/.
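As a concrete picture of the pipeline described above, the following is a minimal sketch of how the three knowledge sources might be wired together using public Hugging Face checkpoints for Qwen3-4B, FLUX.1-schnell, and BLIP-2. Every function name, prompt, and generation setting here is our own illustrative assumption, not the authors' released code (see the repository linked above for that).

```python
# Minimal sketch of BTK-style knowledge-base construction, assuming public
# Hugging Face checkpoints. All function names, prompts, and settings are
# illustrative assumptions, not the authors' released code.
import torch
from transformers import pipeline, Blip2Processor, Blip2ForConditionalGeneration
from diffusers import FluxPipeline

# Stage 1: extract goal-related phrases from an instruction with Qwen3-4B.
llm = pipeline("text-generation", model="Qwen/Qwen3-4B", torch_dtype="auto")

def extract_goal_phrases(instruction: str) -> list[str]:
    prompt = ("List the landmark and object phrases a navigation agent "
              f"should reach, one per line:\n{instruction}\n")
    out = llm(prompt, max_new_tokens=64, return_full_text=False)
    return [ln.strip() for ln in out[0]["generated_text"].splitlines() if ln.strip()]

# Stage 2: generate exemplar images per phrase with FLUX.1-schnell
# (the kind of generation behind the R2R-GP / REVERIE-GP image bases).
flux = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell",
                                    torch_dtype=torch.bfloat16)

def generate_exemplars(phrase: str, n: int = 4):
    return [flux(f"a photo of {phrase}, indoor scene",
                 num_inference_steps=4, guidance_scale=0.0).images[0]
            for _ in range(n)]

# Stage 3: caption panoramic views with BLIP-2 to build the textual base.
blip_proc = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
blip = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16)

def caption_view(view_image) -> str:
    inputs = blip_proc(images=view_image, return_tensors="pt").to(
        blip.device, torch.float16)
    ids = blip.generate(**inputs, max_new_tokens=30)
    return blip_proc.decode(ids[0], skip_special_tokens=True)
```

The heavy generation steps would presumably run offline to build R2R-GP, REVERIE-GP, and the textual base, leaving only retrieval at navigation time.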

Paper Structure

This paper contains 44 sections, 7 equations, 10 figures, 10 tables.

Figures (10)

  • Figure 1: BTK mainly consists of three stages: (1) extracting goal-related phrases using Qwen3-4B and enhancing their semantics through a goal-aware augmentation module; (2) generating visual exemplars to construct the image knowledge base; and (3) retrieving complementary textual knowledge from BLIP-2 to achieve robust cross-modal alignment.
  • Figure 2: Overview of the BTK framework. The model first extracts sub-goals from the natural language instruction. The Goal-Aware Augmentor then uses these sub-goals both to reinforce the instruction's semantics and to generate corresponding image knowledge. Concurrently, the model acquires textual knowledge by matching visual observations against a textual knowledge base (a minimal retrieval sketch follows this list). Finally, the Knowledge Augmentor integrates this multimodal knowledge to jointly enhance the instruction and the visual inputs.
  • Figure 3: Architecture of the Goal-Aware Augmentor.
  • Figure 4: Architecture of image knowledge acquisition.
  • Figure 5: Architecture of the Knowledge Augmentor.
  • ...and 5 more figures
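
Figure 2's matching step, acquiring textual knowledge by comparing visual observations against the knowledge base, can be pictured as a simple top-k similarity search. Below is a minimal sketch assuming that panoramic-view images and knowledge-base captions have already been embedded by a shared vision-language encoder (e.g., CLIP); the function names, toy captions, and random embeddings are illustrative assumptions, not the paper's implementation.

```python
# Illustrative top-k retrieval from a textual knowledge base, as in the
# matching step of Figure 2. Embeddings are assumed to come from a shared
# vision-language encoder (e.g., CLIP); all names here are hypothetical.
import numpy as np

def build_index(caption_embeddings: np.ndarray) -> np.ndarray:
    """L2-normalize caption embeddings so a dot product is cosine similarity."""
    norms = np.linalg.norm(caption_embeddings, axis=1, keepdims=True)
    return caption_embeddings / np.clip(norms, 1e-12, None)

def retrieve(view_embedding: np.ndarray, index: np.ndarray,
             captions: list[str], k: int = 3) -> list[tuple[str, float]]:
    """Return the k knowledge-base captions most similar to a panoramic view."""
    q = view_embedding / max(np.linalg.norm(view_embedding), 1e-12)
    scores = index @ q
    top = np.argsort(scores)[::-1][:k]
    return [(captions[i], float(scores[i])) for i in top]

# Toy usage with random embeddings standing in for encoder outputs.
rng = np.random.default_rng(0)
captions = ["a hallway with a wooden door",
            "a kitchen with a marble counter",
            "a staircase leading to the second floor"]
index = build_index(rng.normal(size=(3, 512)))
print(retrieve(rng.normal(size=512), index, captions, k=2))
```

In the full model, the retrieved captions would then be fused with the instruction and visual features by the Knowledge Augmentor; that fusion step is beyond this sketch.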