
Physics-Aware Diffusion for LiDAR Point Cloud Densification

Zeping Zhang, Robert Laganière

Abstract

LiDAR perception is severely limited by the distance-dependent sparsity of distant objects. While diffusion models can recover dense geometry, they suffer from prohibitive latency and physical hallucinations manifesting as ghost points. We propose Scanline-Consistent Range-Aware Diffusion, a framework that treats densification as probabilistic refinement rather than generation. By leveraging Partial Diffusion (SDEdit) on a coarse prior, we achieve high-fidelity results in just 156ms. Our novel Ray-Consistency loss and Negative Ray Augmentation enforce sensor physics to suppress artifacts. Our method achieves state-of-the-art results on KITTI-360 and nuScenes, directly boosting off-the-shelf 3D detectors without retraining. Code will be made available.
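The abstract's core idea, partial diffusion on a coarse prior in the SDEdit style, can be sketched as follows: instead of denoising from pure Gaussian noise, the coarse prior is noised only up to an intermediate timestep and then denoised back, which is what keeps latency low. This is a minimal illustrative sketch, not the paper's implementation; `denoise_fn`, `alpha_bar`, and the DDIM-style update are placeholder assumptions.

```python
import numpy as np

def partial_diffusion_refine(prior, denoise_fn, alpha_bar, t0, rng):
    """Refine a coarse prior via partial (SDEdit-style) diffusion.

    prior      : coarse range image / point map to refine
    denoise_fn : noise predictor eps_theta(x, t) (placeholder)
    alpha_bar  : cumulative noise schedule with alpha_bar[0] == 1.0
    t0         : intermediate start timestep (smaller = lighter refinement)
    """
    # Forward process in closed form: noise the prior up to timestep t0.
    noise = rng.standard_normal(prior.shape)
    x = np.sqrt(alpha_bar[t0]) * prior + np.sqrt(1.0 - alpha_bar[t0]) * noise
    # Deterministic DDIM-style reverse steps from t0 back to 0.
    for t in range(t0, 0, -1):
        eps = denoise_fn(x, t)  # predicted noise at timestep t
        # Estimate the clean signal, then re-project to timestep t-1.
        x0 = (x - np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alpha_bar[t])
        x = np.sqrt(alpha_bar[t - 1]) * x0 + np.sqrt(1.0 - alpha_bar[t - 1]) * eps
    return x
```

Because sampling starts at `t0` rather than the full noise level, only `t0` reverse steps are needed, which is the lever behind the reported low latency.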

Paper Structure

This paper contains 26 sections, 4 equations, 4 figures, and 6 tables.

Figures (4)

  • Figure 2: Bleeding Artifacts in LiDAR Densification. An example of a densified result generated by the baseline method LiDPM. Note that without physical constraints, existing methods often generate severe ghost artifacts extending behind objects into free space.
  • Figure 3: The proposed Scanline-Consistent Range-Aware Diffusion framework. (Left) Stage-0 constructs a coarse structural prior. (Middle) Stage-1 performs Partial Diffusion. The Probabilistic Range Head predicts both range noise and an uncertainty scale $\hat{b}$. (Right) Physics-Aware Constraints. We employ an Isotropic Probabilistic Consistency loss. The penalty is weighted by a spatial Gaussian kernel (Lateral Weight) scaled by $\hat{b}$, allowing the uncertainty to dynamically determine the valid geometric tolerance for both lateral misalignment ($d_{\perp}$) and radial occlusion ($d_{\parallel}$).
  • Figure 4: Negative Ray Augmentation Strategy. (Panel A) On positive rays, the model learns a probabilistic range distribution. The uncertainty $\hat{b}$ (visualized as the green confidence region) represents the isotropic spatial tolerance learned by the network. (Panel B) On negative rays sampled from known free space, we enforce a zero-occupancy constraint. This explicitly teaches the Occupancy Head to suppress ghost points in free space where the sensor beam should pass through unimpeded.
  • Figure 5: Qualitative Comparison of Densification Results. LiDPM preserves local detail but introduces scattered outliers near object edges and occasional ghost points in free space, whereas our method produces sharp structures and clean free space.
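The physics-aware constraints described in Figures 3 and 4 can be sketched as three loss terms. This is a speculative reading of the captions, not the paper's code: the Laplace parameterization of the uncertainty $\hat{b}$, the Gaussian lateral weighting, and the binary cross-entropy form of the free-space constraint are all assumptions for illustration.

```python
import numpy as np

def laplace_range_nll(r_pred, r_gt, b_hat):
    """Probabilistic range loss on positive rays: negative log-likelihood
    of a Laplace distribution whose scale b_hat is the predicted
    uncertainty (one plausible reading of the Probabilistic Range Head)."""
    return np.mean(np.abs(r_pred - r_gt) / b_hat + np.log(2.0 * b_hat))

def ray_consistency_penalty(d_perp, d_par, b_hat):
    """Isotropic consistency penalty: points occluding the return
    (radial offset d_par > 0) are penalized, down-weighted by a Gaussian
    kernel over the lateral offset d_perp whose width is set by b_hat,
    so the learned uncertainty defines the geometric tolerance."""
    lateral_w = np.exp(-d_perp ** 2 / (2.0 * b_hat ** 2))
    return np.mean(lateral_w * np.maximum(d_par, 0.0) ** 2)

def negative_ray_loss(occ_logits):
    """Zero-occupancy constraint on rays sampled from known free space:
    cross-entropy against an all-zero target, i.e. -log(1 - sigmoid(x)),
    written as softplus for numerical stability."""
    return np.mean(np.log1p(np.exp(occ_logits)))
```

Under this reading, the lateral Gaussian kernel lets confident predictions (small `b_hat`) enforce a tight occlusion check along the beam, while the negative-ray term directly pushes occupancy toward zero wherever the sensor beam is known to have passed unimpeded, suppressing the ghost points shown in Figure 2.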