Confidence-Based Mesh Extraction from 3D Gaussians

Lukas Radl, Felix Windisch, Andreas Kurz, Thomas Köhler, Michael Steiner, Markus Steinberger

Abstract

Recently, 3D Gaussian Splatting (3DGS) greatly accelerated mesh extraction from posed images due to its explicit representation and fast software rasterization. While the addition of geometric losses and other priors has improved the accuracy of extracted surfaces, mesh extraction remains difficult in scenes with abundant view-dependent effects. To resolve the resulting ambiguities, prior works rely on multi-view techniques, iterative mesh extraction, or large pre-trained models, sacrificing the inherent efficiency of 3DGS. In this work, we present a simple and efficient alternative by introducing a self-supervised confidence framework to 3DGS: within this framework, learnable confidence values dynamically balance photometric and geometric supervision. Extending our confidence-driven formulation, we introduce losses which penalize per-primitive color and normal variance and demonstrate their benefits to surface extraction. Finally, we complement the above with an improved appearance model, by decoupling the individual terms of the D-SSIM loss. Our final approach delivers state-of-the-art results for unbounded meshes while remaining highly efficient.
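The core idea of the confidence framework, per-primitive learnable values that trade off photometric against geometric supervision, can be illustrated with a minimal sketch. This is only an assumed formulation for illustration: the function `blended_loss`, the sigmoid parameterization, and the log-confidence regularizer are hypothetical and are not taken from the paper, which may combine the terms differently.

```python
import numpy as np

def blended_loss(conf_logits, l_photo, l_geom, reg_weight=0.1):
    """Hypothetical confidence-weighted blend of per-Gaussian losses.

    conf_logits : unconstrained learnable parameters, one per Gaussian
    l_photo     : per-Gaussian photometric loss terms
    l_geom      : per-Gaussian geometric loss terms

    A sigmoid maps each logit to a confidence in (0, 1); high confidence
    weights the photometric term, low confidence the geometric term.
    The regularizer discourages the trivial collapse of all confidences
    toward zero. The paper's exact loss may differ.
    """
    c = 1.0 / (1.0 + np.exp(-conf_logits))            # confidence in (0, 1)
    per_gaussian = c * l_photo + (1.0 - c) * l_geom   # self-supervised blend
    reg = -reg_weight * np.mean(np.log(c + 1e-8))     # penalize c -> 0
    return np.mean(per_gaussian) + reg
```

With zero logits every confidence is 0.5, so (ignoring the regularizer) the blended loss is simply the average of the two terms; during training, gradient descent on `conf_logits` would shift weight toward whichever supervision signal a Gaussian can satisfy.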

Paper Structure

This paper contains 57 sections, 28 equations, 14 figures, 8 tables.

Figures (14)

  • Figure 1: Teaser: We propose a novel, confidence-based method to extract meshes from 3D Gaussians. Each Gaussian is equipped with additional confidence values that balance photometric and geometric losses in a self-supervised manner. Compared to related work, our final meshes exhibit finer details and fewer artifacts.
  • Figure 2: Analysis of the Photometric Loss Components: The luminance term of D-SSIM is highly dependent on the illumination; in this example, the luminance error is reduced by a factor of $9\times$ by considering the appearance embedding.
  • Figure 3: Confidence-Driven Gaussian Splatting: We show rendered images from our trained method, accompanied by the rendered confidence maps $\hat{C}$. The confidence maps effectively isolate reflective surfaces, thin foliage, or rarely observed areas (such as the roof for Barn). Importantly, the low-confidence regions are still well-reconstructed.
  • Figure 4: Effect of our proposed Variance Losses: Rendering color and normals only for the first-hit Gaussians shows how our variance losses align individual Gaussians better to the true object surface, in both color and orientation.
  • Figure 5: Qualitative Mesh Comparison: Our approach produces more detailed meshes with fewer artifacts than other unbounded methods [guedon2025milo, Radl2025SOF]. Additionally, it achieves higher completeness and finer detail than bounded extraction works [chen2024pgsr, zhang2025qgs].
  • ...and 9 more figures