M2H-MX: Multi-Task Dense Visual Perception for Real-Time Monocular Spatial Understanding

U. V. B. L. Udugama, George Vosselman, Francesco Nex

Abstract

Monocular cameras are attractive for robotic perception due to their low cost and ease of deployment, yet achieving reliable real-time spatial understanding from a single image stream remains challenging. While recent multi-task dense prediction models have improved per-pixel depth and semantic estimation, translating these advances into stable monocular mapping systems is still non-trivial. This paper presents M2H-MX, a real-time multi-task perception model for monocular spatial understanding. The model preserves multi-scale feature representations while introducing register-gated global context and controlled cross-task interaction in a lightweight decoder, enabling depth and semantic predictions to reinforce each other under strict latency constraints. Its outputs integrate directly into an unmodified monocular SLAM pipeline through a compact perception-to-mapping interface. We evaluate both dense prediction accuracy and in-the-loop system performance. On NYUDv2, M2H-MX-L achieves state-of-the-art results, improving semantic mIoU by 6.6% and reducing depth RMSE by 9.4% over representative multi-task baselines. When deployed in a real-time monocular mapping system on ScanNet, M2H-MX reduces average trajectory error by 60.7% compared to a strong monocular SLAM baseline while producing cleaner metric-semantic maps. These results demonstrate that modern multi-task dense prediction can be reliably deployed for real-time monocular spatial perception in robotic systems.

Figures (5)

  • Figure 1: Qualitative monocular mapping comparison on ScanNet scene0000_00. Compared with DROID-SLAM and Go-SLAM, integrating M2H-MX produces cleaner geometry and more consistent semantic structure in the downstream map.
  • Figure 2: Overview of the M2H-MX architecture. A monocular RGB image is processed by a DINOv3 backbone with LoRA adaptation applied to the final transformer blocks. Backbone features are reassembled by token reassembly (TR) and organized into a multi-scale pyramid via explicit spatial resampling. At each pyramid level, a Register-Gated Mamba (RGM) block injects global scene context from backbone register tokens while performing efficient long-range feature propagation. Task Adaptors (TA) generate task-specific features at each scale, which are fused through a Cross-Task Mixer (CTM) to enable controlled exchange between related tasks. Multi-Scale Convolutional Attention (MSCA) then refines the fused representations using depthwise spatial attention. Lightweight task heads produce dense predictions for depth, semantics, and optional normals and edges. (A dataflow sketch of this pipeline follows this list.)
  • Figure 3: Register-Gated Mamba (RGM) block used at each decoder scale. A global register vector generates a channel-wise gate $g$ through a Linear+Sigmoid projection, which modulates the reshaped feature tokens. The gated features are then processed by Layer Normalization (LN) followed by a Mamba block and a feed-forward network (FFN), each applied with a residual connection. (An illustrative sketch of this block follows this list.)
  • Figure 4: Combined module visualization: (a) the Cross-Task Mixer (CTM) for gated cross-task feature injection, and (b) Multi-Scale Convolutional Attention (MSCA) for residual refinement. (A sketch of the CTM idea follows this list.)
  • Figure 5: System overview showing M2H-MX deployed as a perception front-end to a fixed monocular SLAM pipeline (Mono-Hydra). M2H-MX runs on the GPU and predicts dense depth and semantic labels from monocular RGB input. These outputs are consumed by an RGB-D inertial odometry module and a Mono-Hydra-based mapping backend running on the CPU.
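
The Figure 2 pipeline can be summarized as a forward pass. The following is a dataflow-only sketch in PyTorch: every module is a deliberately simple stand-in for the corresponding M2H-MX component (the DINOv3 backbone with LoRA, TR, RGM, TA, CTM, MSCA, and the task heads), and all channel widths, pyramid scales, class counts, and mixing weights are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class M2HMXForwardSketch(nn.Module):
    """Dataflow-only sketch of Figure 2. Every module below is a simple
    stand-in for the corresponding M2H-MX component; only the ordering of
    operations mirrors the caption."""

    def __init__(self, dim=64, n_classes=21):
        super().__init__()
        self.tasks = ("depth", "semantics")
        self.backbone = nn.Conv2d(3, dim, 16, stride=16)  # DINOv3 + LoRA + TR stand-in
        self.rgm = nn.ModuleList(nn.Conv2d(dim, dim, 3, padding=1) for _ in range(3))
        self.adaptors = nn.ModuleDict({t: nn.Conv2d(dim, dim, 1) for t in self.tasks})
        self.refine = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)  # MSCA stand-in
        self.heads = nn.ModuleDict({
            "depth": nn.Conv2d(dim, 1, 1),
            "semantics": nn.Conv2d(dim, n_classes, 1),
        })

    def forward(self, rgb):
        base = self.backbone(rgb)
        # Multi-scale pyramid via explicit spatial resampling (Figure 2).
        pyr = [F.interpolate(base, scale_factor=s, mode="bilinear")
               for s in (0.5, 1.0, 2.0)]
        # Per-level context block (register gating omitted; see the RGM sketch below).
        pyr = [blk(p) for blk, p in zip(self.rgm, pyr)]
        # Collapse the pyramid to the finest level before the task adaptors.
        fused = sum(F.interpolate(p, size=pyr[-1].shape[-2:], mode="bilinear")
                    for p in pyr) / len(pyr)
        feats = {t: self.adaptors[t](fused) for t in self.tasks}
        # CTM stand-in: fixed-weight cross-task injection (the real CTM gates this).
        mixed = {t: feats[t] + 0.5 * sum(feats[o] for o in self.tasks if o != t)
                 for t in self.tasks}
        # MSCA stand-in: depthwise residual refinement, then lightweight task heads.
        return {t: self.heads[t](mixed[t] + self.refine(mixed[t])) for t in self.tasks}
```

For example, `M2HMXForwardSketch()(torch.randn(1, 3, 256, 256))` returns a dictionary with a 1-channel depth map and an `n_classes`-channel semantic logit map, both at the finest pyramid resolution.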
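The RGM block of Figure 3 is concrete enough to sketch more faithfully. This minimal version follows the caption (register vector, Linear+Sigmoid channel gate, gated tokens, then LN, Mamba, and FFN with residual connections), but substitutes a plain linear token mixer for the actual Mamba state-space layer; the pre-norm ordering and the FFN expansion factor are assumptions.

```python
import torch
import torch.nn as nn

class RGMBlock(nn.Module):
    def __init__(self, dim: int, ffn_mult: int = 4):
        super().__init__()
        # Linear + Sigmoid projection of the global register vector into a
        # channel-wise gate g (Figure 3).
        self.gate_proj = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())
        self.norm1 = nn.LayerNorm(dim)
        # Stand-in for the Mamba state-space block, which in the paper performs
        # efficient long-range propagation over the token sequence.
        self.token_mixer = nn.Linear(dim, dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, ffn_mult * dim), nn.GELU(), nn.Linear(ffn_mult * dim, dim)
        )

    def forward(self, tokens: torch.Tensor, register: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, C) reshaped feature tokens; register: (B, C) global vector.
        g = self.gate_proj(register).unsqueeze(1)  # (B, 1, C) channel-wise gate
        x = tokens * g                             # register-gated modulation
        x = x + self.token_mixer(self.norm1(x))    # LN -> (Mamba stand-in), residual
        x = x + self.ffn(self.norm2(x))            # LN -> FFN, residual
        return x
```

For instance, `RGMBlock(256)` maps `(B, N, 256)` tokens plus a `(B, 256)` register vector to gated, mixed tokens of the same shape.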
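Figure 4(a)'s gated cross-task injection admits a similarly small sketch. One plausible reading, assumed here rather than taken from the paper, is that a source task's features are projected and sigmoid-gated before being added residually to the target task's features; the 1x1 convolutions and the per-channel, per-location gating are illustrative choices.

```python
import torch
import torch.nn as nn

class CrossTaskMixer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)
        self.gate = nn.Sequential(nn.Conv2d(dim, dim, kernel_size=1), nn.Sigmoid())

    def forward(self, target: torch.Tensor, source: torch.Tensor) -> torch.Tensor:
        # target, source: (B, C, H, W) features of two related tasks, e.g.
        # depth receiving information from semantics. The sigmoid gate decides,
        # per channel and location, how much source information to inject.
        return target + self.gate(source) * self.proj(source)
```

MSCA (Figure 4b) could be sketched analogously: depthwise convolutions at several kernel sizes produce a spatial attention map that multiplicatively reweights the features before a residual addition.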