Refining 3D Point Cloud Normal Estimation via Sample Selection
Jun Zhou, Yaoshun Li, Hongchen Tan, Mingjie Wang, Nannan Li, Xiuping Liu
TL;DR
The paper addresses the robustness of point cloud normal estimation under noisy and imperfect training data by introducing a two-branch network that fuses local and global context. It adds confidence-based sample weighting to its loss to suppress corrupt patches and adopts an orientation-refinement step using NeuralGF to achieve both accurate and consistently oriented normals. Through extensive experiments on synthetic and real datasets, it reports state-of-the-art performance for unoriented and oriented normals and demonstrates benefits for Poisson reconstruction and denoising. The approach offers practical improvements for downstream 3D reconstruction and analysis, particularly in noisy or varying-density point clouds.
Abstract
In recent years, point cloud normal estimation, a classical and foundational algorithm in 3D geometric processing, has garnered extensive attention. Despite the remarkable performance of current neural-network-based methods, their robustness is still limited by the quality of the training data and by model capacity. In this study, we design a fundamental framework for normal estimation, enhancing an existing model through the incorporation of global information and various constraint mechanisms. Additionally, we employ a confidence-based strategy to select reasonable samples for fair and robust network training. The introduced sample confidence can be integrated into the loss function to balance the influence of different samples on model training. Finally, we utilize an existing orientation method to correct the estimated unoriented normals, achieving state-of-the-art performance on both oriented and unoriented tasks. Extensive experimental results demonstrate that our method performs well on widely used benchmarks.
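To make the confidence-weighting idea concrete, here is a minimal sketch of how per-sample confidences might modulate an unoriented normal-estimation loss. The function name, the (1 − |cos|) angular error, and the weight normalization below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def confidence_weighted_loss(pred_normals, gt_normals, confidences, eps=1e-8):
    """Confidence-weighted unoriented angular loss (illustrative sketch).

    pred_normals, gt_normals: (N, 3) arrays of per-point normals.
    confidences: (N,) non-negative per-sample confidence scores;
    low-confidence (likely corrupt) samples contribute less to the loss.
    """
    # Normalize both sets of normals to unit length.
    pred = pred_normals / (np.linalg.norm(pred_normals, axis=1, keepdims=True) + eps)
    gt = gt_normals / (np.linalg.norm(gt_normals, axis=1, keepdims=True) + eps)
    # Unoriented error: take |cos| so a sign flip of the normal costs nothing.
    cos_sim = np.abs(np.sum(pred * gt, axis=1))
    per_sample = 1.0 - cos_sim
    # Normalize confidences into weights so the loss scale stays comparable.
    w = confidences / (confidences.sum() + eps)
    return float(np.sum(w * per_sample))
```

A sample assigned zero confidence is effectively removed from training, while the remaining samples keep their relative influence; this is one simple way the described "balance the influence of different samples" behavior could be realized.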
