Accurate Surface and Reflectance Modelling from 3D Radar Data with Neural Radiance Fields

Judith Treffler, Vladimír Kubelka, Henrik Andreasson, Martin Magnusson

Abstract

Robust scene representation is essential for autonomous systems to operate safely in challenging low-visibility environments. Radar has a clear advantage over cameras and lidars in these conditions due to its resilience to environmental factors such as fog, smoke, or dust. However, radar data is inherently sparse and noisy, making reliable 3D surface reconstruction challenging. To address these challenges, we propose a neural implicit approach for 3D mapping from radar point clouds, which jointly models scene geometry and view-dependent radar intensities. Our method leverages a memory-efficient hybrid feature encoding to learn a continuous Signed Distance Field (SDF) for surface reconstruction, while also capturing radar-specific reflective properties. We show that our approach produces smoother, more accurate 3D surface reconstructions than existing lidar-based reconstruction methods applied to radar data, and can reconstruct view-dependent radar intensities. We also show that, as input point clouds become sparser, neural implicit representations generally render more faithful surfaces than traditional explicit SDFs and meshing techniques.

Paper Structure

This paper contains 23 sections, 1 equation, 7 figures, and 3 tables.

Figures (7)

  • Figure 1: Accurate surface reconstruction (right) produced by 3QFPI from a set of 3D radar point clouds (left) from the Radar Forest dataset. The mesh is coloured according to surface normals.
  • Figure 2: Network Architecture: Given a 3D point $\textbf{x}$, we concatenate its tri-quadtree feature and Fourier feature positional encoding and pass them to the SDF network. The SDF network predicts an SDF value and, optionally, a learned geometry feature and/or approximated SDF normals. These outputs, along with the spherical harmonics-encoded viewing direction and the Fourier feature encoding of $\textbf{x}$, are concatenated and fed into the intensity network to predict the intensity value for $\textbf{x}$.
  • Figure 3: Surface reconstruction quality of different methods on the Radar Forest dataset (\ref{fig:forest-rgb}--\ref{fig:mesh-forest-3qfp}), and a corner of the SNAIL-Radar basketball court dataset showing a basket and a building in the background (\ref{fig:snail-rgb}--\ref{fig:mesh-snail-3qfp}). For reference, we include an image of the scene (from a different angle) and the lidar-based reconstruction created with SHINE-Mapping; meshes are coloured by surface normals. The comparison indicates that 3QFPI produces more accurate and smoother locally planar surfaces from noisy data. In particular, the reconstruction of the building in (\ref{fig:mesh-snail-3qfp}) is closest to the lidar reference (\ref{fig:mesh-snail-lidar}).
  • Figure 4: Outlines of the histograms of angles between adjacent mesh triangles for surface reconstructions from the Radar Forest and SNAIL-Radar datasets. For both datasets, 3QFPI produces the largest proportion of small angles, indicating better preservation of locally planar regions, such as at the ground or walls.
  • Figure 5: Comparison of completion ratios using every $n^{\mathrm{th}}$ point cloud from the Radar Forest dataset as input. Except for Poisson, the completion of classical meshing methods declines quickly with sparse input, whereas neural implicit methods remain more robust.
  • ...and 2 more figures
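The Fourier feature positional encoding mentioned in the Figure 2 caption can be sketched as follows. This is a minimal illustration assuming the standard sin/cos formulation commonly used in neural implicit mapping; `num_bands` and the power-of-two frequency schedule are illustrative assumptions, not the paper's actual hyperparameters.

```python
import math

def fourier_encode(point, num_bands=4):
    """Fourier feature positional encoding of a 3D point.

    Each coordinate is passed through sin/cos at geometrically
    increasing frequencies (2^k * pi), giving the downstream SDF
    network access to high-frequency spatial variation that a raw
    coordinate input would struggle to represent.

    NOTE: num_bands=4 and the 2^k * pi schedule are illustrative
    choices, not taken from the paper.
    """
    encoding = []
    for coord in point:
        for k in range(num_bands):
            freq = (2.0 ** k) * math.pi
            encoding.append(math.sin(freq * coord))
            encoding.append(math.cos(freq * coord))
    # Length: 3 coordinates * num_bands frequencies * 2 functions.
    return encoding

# Example: encode a query point before concatenating with the
# tri-quadtree feature and feeding the result to the SDF network.
features = fourier_encode([0.1, -0.2, 0.3])
print(len(features))  # 24
```

In the architecture of Figure 2, this encoding is concatenated with the tri-quadtree feature of the same point before entering the SDF network, and reused (together with the spherical-harmonics-encoded viewing direction) as input to the intensity network.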