
ContraMap: Contrastive Uncertainty Mapping for Robot Environment Representation

Chi Cuong Le, Weiming Zhi

Abstract

Reliable robot perception requires not only predicting scene structure, but also identifying where predictions should be treated as unreliable due to sparse or missing observations. We present ContraMap, a contrastive continuous mapping method that augments kernel-based discriminative maps with an explicit uncertainty class trained using synthetic noise samples. This formulation treats unobserved regions as a contrastive class, enabling joint environment prediction and spatial uncertainty estimation in real time without Bayesian inference. Under a simple mixture-model view, we show that the probability assigned to the uncertainty class is a monotonic function of a distance-aware uncertainty surrogate. Experiments in 2D occupancy mapping, 3D semantic mapping, and tabletop scene reconstruction show that ContraMap preserves mapping quality, produces spatially coherent uncertainty estimates, and is substantially more efficient than Bayesian kernel-map baselines.
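The abstract's mixture-model claim can be made concrete with a minimal one-component sketch (the notation $\lambda$, $u$, $\boldsymbol{\mu}$, $\sigma$ below is our own illustrative assumption, not taken from the paper): model observed data by a Gaussian component with weight $1-\lambda$ and synthetic noise by a uniform density $u$ over the workspace with weight $\lambda$. Bayes' rule then gives

$$
p(\text{uncertain}\mid \mathbf{x})
= \frac{\lambda u}{\lambda u + (1-\lambda)\,\mathcal{N}(\mathbf{x};\boldsymbol{\mu},\sigma^2 I)}
= \left(1 + \frac{1-\lambda}{\lambda u}\,(2\pi\sigma^2)^{-d/2}\,
   e^{-\|\mathbf{x}-\boldsymbol{\mu}\|^{2}/(2\sigma^{2})}\right)^{-1},
$$

which is strictly increasing in $\|\mathbf{x}-\boldsymbol{\mu}\|$: the probability assigned to the uncertainty class grows monotonically with distance from observed data.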

Paper Structure

This paper contains 18 sections, 12 equations, 9 figures, 4 tables.

Figures (9)

  • Figure 1: For robust robot operation, scene representations should provide not only spatially consistent mapping but also a measure of uncertainty across the environment. ContraMap augments continuous classification-based mapping with an additional uncertainty output, enabling joint environment representation and direct uncertainty prediction at any queried location. The model reconstructs scene structure while assigning high uncertainty to occluded or weakly observed regions, such as the space behind the table.
  • Figure 2: Predictive uncertainty for a Gaussian Process (GP) and neural-network baselines on three toy datasets. Orange/Magenta points are in-distribution training samples, and Red points are out-of-distribution samples. Background shading indicates relative uncertainty (brighter = higher, darker = lower). Our method best matches the GP “gold standard”: it stays confident near observed data and becomes uncertain as inputs move away from the training distribution.
  • Figure 3: Overview of ContraMap. Our method employs a softmax classifier network with an additional output node to jointly represent the environment and estimate uncertainty. Observed data (e.g., LiDAR scan or segmented point cloud) are augmented with negative samples and noise, where the noise is labeled as an additional $(C+1)$-th or "uncertain" class. The augmented data are then projected into a feature vector space using a set of reference points, forming the training dataset $\mathbf{D}_{\mathrm{train}}$. The model is trained on this dataset using a first-order optimization method. Once trained, we can query the model at arbitrary locations in the environment to obtain either environment mapping results or uncertainty estimates via the additional "uncertain" node.
  • Figure 4: Occupancy mapping results of each method on the Intel dataset.
  • Figure 5: Results extracted from each output of the softmax model.
  • ...and 4 more figures
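The pipeline described in the Figure 3 caption — augment observations with noise labelled as a $(C+1)$-th class, project inputs onto features anchored at reference points, then train a softmax classifier with a first-order method — can be sketched as follows. This is a minimal toy reconstruction, not the paper's implementation: the RBF feature map, grid of reference points, learning rate, and 2D Gaussian-blob data are all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed data: two environment classes clustered near the origin.
n = 200
X_free = rng.normal([-1.0, 0.0], 0.3, size=(n, 2))   # class 0
X_occ  = rng.normal([+1.0, 0.0], 0.3, size=(n, 2))   # class 1

# Synthetic noise spread over a larger box, labelled as the
# contrastive (C+1)-th "uncertain" class (class 2 here).
X_noise = rng.uniform(-4.0, 4.0, size=(2 * n, 2))    # class 2

X = np.vstack([X_free, X_occ, X_noise])
y = np.concatenate([np.zeros(n), np.ones(n), np.full(2 * n, 2)]).astype(int)

# Project inputs onto RBF features anchored at a grid of reference
# points (an assumed stand-in for the paper's kernel feature map).
gx, gy = np.meshgrid(np.linspace(-4, 4, 9), np.linspace(-4, 4, 9))
refs = np.stack([gx.ravel(), gy.ravel()], axis=1)

def features(pts, gamma=1.0):
    d2 = ((pts[:, None, :] - refs[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

Phi = features(X)

# Multinomial logistic regression over C+1 = 3 classes, trained with
# plain gradient descent (a first-order optimization method).
W = np.zeros((Phi.shape[1], 3))
Y = np.eye(3)[y]
for _ in range(500):
    logits = Phi @ W
    P = np.exp(logits - logits.max(1, keepdims=True))
    P /= P.sum(1, keepdims=True)
    W -= 0.5 * Phi.T @ (P - Y) / len(y)

def predict_proba(pts):
    logits = features(pts) @ W
    P = np.exp(logits - logits.max(1, keepdims=True))
    return P / P.sum(1, keepdims=True)

# Query arbitrary locations: near observed data the "uncertain" output
# gets little mass; far from any observation it dominates.
near = predict_proba(np.array([[-1.0, 0.0]]))[0, 2]
far  = predict_proba(np.array([[3.5, 3.5]]))[0, 2]
print(f"p(uncertain | near data) = {near:.2f}, p(uncertain | far) = {far:.2f}")
```

Once trained, the same model answers both queries in the caption: the first $C$ softmax outputs give the environment map, and the extra node gives a spatial uncertainty estimate at any queried location.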