
STRNet: Visual Navigation with Spatio-Temporal Representation through Dynamic Graph Aggregation

Hao Ren, Zetong Bi, Yiming Zeng, Zhaoliang Wan, Lu Qi, Hui Cheng

Abstract

Visual navigation requires a robot to reach a specified goal, such as a goal image, based on a sequence of first-person visual observations. While recent learning-based approaches have made significant progress, they often focus on improving policy heads or decision strategies while relying on simplistic feature encoders and temporal pooling to represent the visual input. This discards fine-grained spatial and temporal structure, ultimately limiting accurate action prediction and progress estimation. In this paper, we propose a unified spatio-temporal representation framework that enhances visual encoding for robotic navigation. Our approach extracts features from both the image sequence and the goal observation, and fuses them with a dedicated spatio-temporal fusion module. This module performs spatial graph reasoning within each frame and models temporal dynamics using a hybrid temporal shift module combined with multi-resolution difference-aware convolution. Experimental results demonstrate that our approach consistently improves navigation performance and offers a generalizable visual backbone for goal-conditioned control. Code is available at \href{https://github.com/hren20/STRNet}{https://github.com/hren20/STRNet}.
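To make the temporal-shift idea concrete, the sketch below shows a generic channel-wise temporal shift over a sequence of per-frame feature vectors, in the spirit of standard temporal shift modules: a fraction of the channels exchange information with the previous frame, another fraction with the next frame, and the rest stay in place. This is an illustrative NumPy sketch, not the paper's actual implementation; the function name, the `shift_div` parameter, and the `(T, C)` feature layout are assumptions for exposition.

```python
import numpy as np

def temporal_shift(x, shift_div=4):
    """Shift a fraction of channels along the time axis.

    x: array of shape (T, C) -- T frames, C feature channels per frame.
    The first C//shift_div channels receive features from the next frame,
    the second C//shift_div channels receive features from the previous
    frame, and the remaining channels are passed through unchanged.
    Vacated time steps are zero-filled.
    """
    T, C = x.shape
    fold = C // shift_div
    out = np.zeros_like(x)
    out[:-1, :fold] = x[1:, :fold]              # frame t sees frame t+1
    out[1:, fold:2 * fold] = x[:-1, fold:2 * fold]  # frame t sees frame t-1
    out[:, 2 * fold:] = x[:, 2 * fold:]         # untouched channels
    return out

# 3 frames, 4 channels: with shift_div=4, one channel shifts each way.
feats = np.arange(12, dtype=float).reshape(3, 4)
shifted = temporal_shift(feats, shift_div=4)
```

After the shift, each frame's feature vector mixes channels from its temporal neighbors, so a subsequent per-frame (spatial) operation can reason over short-range temporal context without an explicit temporal convolution.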

Paper Structure

This paper contains 18 sections, 18 equations, 6 figures, 6 tables.

Figures (6)

  • Figure 1: t-SNE projections of feature embeddings colored by ground-truth temporal distances. (a) Conventional temporal-context pooling encoder (NoMaD sridhar2024nomad) produces entangled embeddings, mixing near- and far-to-goal states. (b) Proposed STRNet, using graph-based spatial aggregation and hybrid spatio-temporal fusion, yields clearly separated embeddings, effectively capturing spatial and temporal cues.
  • Figure 2: Pipeline of the proposed model for action prediction: the model processes input observations and goal images through feature extraction, spatial feature aggregation, temporal feature fusion, and a hybrid temporal shift, followed by task-specific processing (temporal distance computation and diffusion denoising) to obtain the final action prediction.
  • Figure 3: (a) A grid structure representing a partitioned image, and (b) a graph structure illustrating the relationships between different regions of the image. The graph organizes context in a more flexible way that aligns with semantic topological relationships, avoiding the limitations of local receptive fields and predefined patch orderings.
  • Figure 4: Qualitative navigation trajectories (blue) produced by STRNet in 2D-3D-S and Citysim Environments.
  • Figure 5: Failure cases of NoMaD in visual navigation. (a) Suboptimal behavior caused by poor representation. (b) Hesitation due to incorrect understanding. (c) Incorrect motion direction and erratic trajectory. (d) Increased collisions.
  • ...and 1 more figure