
Refining 3D Point Cloud Normal Estimation via Sample Selection

Jun Zhou, Yaoshun Li, Hongchen Tan, Mingjie Wang, Nannan Li, Xiuping Liu

TL;DR

The paper addresses the robustness of point cloud normal estimation under noisy and imperfect training data by introducing a two-branch network that fuses local and global context. It adds confidence-based sample weighting to its loss to suppress corrupt patches and adopts an orientation-refinement step using NeuralGF to achieve both accurate and consistently oriented normals. Through extensive experiments on synthetic and real datasets, it reports state-of-the-art performance for unoriented and oriented normals and demonstrates benefits for Poisson reconstruction and denoising. The approach offers practical improvements for downstream 3D reconstruction and analysis, particularly in noisy or varying-density point clouds.

Abstract

In recent years, point cloud normal estimation, as a classical and foundational algorithm, has garnered extensive attention in the field of 3D geometric processing. Despite the remarkable performance achieved by current neural network-based methods, their robustness still depends on the quality of the training data and the capability of the models. In this study, we design a fundamental framework for normal estimation, enhancing existing models through the incorporation of global information and various constraint mechanisms. Additionally, we employ a confidence-based strategy to select reasonable samples for fair and robust network training. The introduced sample confidence can be integrated into the loss function to balance the influence of different samples on model training. Finally, we utilize existing orientation methods to correct the estimated unoriented normals, achieving state-of-the-art performance on both oriented and unoriented tasks. Extensive experimental results demonstrate that our method performs well on widely used benchmarks.
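The confidence-weighted loss idea from the abstract can be sketched as follows. This is a minimal illustration only: the function name, the unoriented cosine error, and the exact weighting scheme are assumptions, not the paper's actual loss.

```python
import numpy as np

def confidence_weighted_normal_loss(pred, gt, confidence):
    """Unoriented normal loss weighted by per-sample confidence.

    pred, gt   : (N, 3) arrays of unit normals (prediction / ground truth)
    confidence : (N,) weights in [0, 1]; corrupt samples get low weight
    """
    # Unoriented angular error: 1 - |cos(angle)| ignores sign flips,
    # so a normal and its negation incur the same error.
    cos = np.clip(np.abs(np.sum(pred * gt, axis=1)), 0.0, 1.0)
    per_sample = 1.0 - cos
    # Confidence re-weighting down-weights corrupt patches in the average.
    return float(np.sum(confidence * per_sample) / (np.sum(confidence) + 1e-8))
```

With all confidences set to 1 this reduces to a plain mean; setting a sample's confidence to 0 removes its influence on training entirely.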

Paper Structure

This paper contains 18 sections, 13 equations, 8 figures, 5 tables.

Figures (8)

  • Figure 1: Explanation of how corrupt noisy samples affect network training. (A) The top part illustrates the differences between low-noise and high-noise patches with respect to their underlying surfaces. Under high noise, local patches may lack clear surface patterns. Additionally, compared to patches with low noise levels, those with high noise may show notable differences between the normals of query points and those of the nearest points on the surface. These instances, known as corrupt samples, can weaken the model's robustness during training. (B) The table outlines four training strategies: A uses the entire PCPNet dataset, B uses only clean data, C corrects normals from each point's underlying surface, and D employs our confidence-based training method. It shows that relying solely on clean data or corrected normals is not ideal due to insufficient training data, resulting in reduced model robustness. (C) This part illustrates how the sampling range varies with the confidence values, which decrease as the noise scale increases.
  • Figure 2: The learning pipeline of our method. Data preprocessing: local patches of query points and globally sampled patches based on probabilities are initialized after PCA processing. Network architecture: a shared QSTN aligns the inputs of the multiple branches, while the NeuralGF method rectifies normal orientation. Sample selection: two strategies for estimating confidence values are presented, with the surface-based confidence assessing the distance of each point to the potential surface, and the normal-based confidence evaluating the disparity between the normals of each point and those of the potential surface.
  • Figure 3: AUC on the PCPNet and FamousShape datasets. The X and Y axes are the angle threshold and the percentage of good points (PGP), i.e., the fraction of normals whose angular error falls below that threshold.
  • Figure 4: Visualization comparisons on the PCPNet and FamousShape datasets, with numbers indicating RMSEs. The angle error is visualized using a heatmap.
  • Figure 5: Qualitative comparisons on the SceneNN dataset (noise: σ = 0.3%).
  • ...and 3 more figures
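The PGP/AUC evaluation referenced in Figure 3 can be sketched as follows. This is an illustrative sketch only: the function names, the threshold grid, and the AUC normalization are assumptions, not the paper's exact protocol.

```python
import numpy as np

def pgp_curve(pred, gt, thresholds_deg):
    """Percentage of good points (PGP): for each angle threshold, the
    fraction of normals whose unoriented angular error is below it.

    pred, gt       : (N, 3) arrays of unit normals
    thresholds_deg : (T,) increasing angle thresholds in degrees
    """
    cos = np.clip(np.abs(np.sum(pred * gt, axis=1)), 0.0, 1.0)
    err_deg = np.degrees(np.arccos(cos))
    return np.array([np.mean(err_deg < t) for t in thresholds_deg])

def auc_pgp(pgp, thresholds_deg):
    """Area under the PGP curve, normalized by the threshold span so a
    perfect estimator scores 1.0 (trapezoidal rule)."""
    span = thresholds_deg[-1] - thresholds_deg[0]
    area = np.sum(0.5 * (pgp[1:] + pgp[:-1]) * np.diff(thresholds_deg))
    return float(area / span)
```

A perfect prediction yields PGP = 1 at every threshold and hence AUC = 1, while an estimator that is always 90° off scores 0.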