
SGS-Intrinsic: Semantic-Invariant Gaussian Splatting for Sparse-View Indoor Inverse Rendering

Jiahao Niu, Rongjia Zheng, Wenju Xu, Wei-Shi Zheng, Qing Zhang

Abstract

We present SGS-Intrinsic, an indoor inverse rendering framework that works well with sparse-view images. Unlike existing 3D Gaussian Splatting (3DGS) based methods, which focus on object-centric reconstruction and fail under sparse-view settings, our method achieves high-quality geometry reconstruction and accurate disentanglement of material and illumination. The core idea is to construct a dense, geometry-consistent Gaussian semantic field guided by semantic and geometric priors, providing a reliable foundation for subsequent inverse rendering. Building upon this, we perform material-illumination disentanglement by combining a hybrid illumination model with a material prior to effectively capture illumination-material interactions. To mitigate the impact of cast shadows and enhance the robustness of material recovery, we introduce an illumination-invariant material constraint together with a deshadowing model. Extensive experiments on benchmark datasets show that our method consistently improves both reconstruction fidelity and inverse rendering quality over existing 3DGS-based inverse rendering approaches. Our code is available at https://github.com/GrumpySloths/SGS_Intrinsic.github.io.
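To make the shadow-robust material recovery concrete, below is a minimal, self-contained sketch of one way an illumination-invariant material constraint could be enforced; the correspondence pairing, the chromaticity normalization, and the name `material_consistency_loss` are our illustrative assumptions, not the paper's released implementation.

```python
import torch

def material_consistency_loss(albedo_a, albedo_b, weight, eps=1e-6):
    """Penalize albedo disagreement between corresponding surface points
    observed in two views / illumination states.

    albedo_a, albedo_b: (N, 3) rendered albedos at corresponding points.
    weight: (N,) validity weights in [0, 1], e.g. down-weighting
        correspondences that a deshadowing module flags as shadowed.
    """
    # Compare chromaticity rather than raw RGB so that residual brightness
    # scale (which belongs to the illumination branch) is not penalized
    # as a material error.
    chroma_a = albedo_a / (albedo_a.sum(dim=-1, keepdim=True) + eps)
    chroma_b = albedo_b / (albedo_b.sum(dim=-1, keepdim=True) + eps)
    return (weight[:, None] * (chroma_a - chroma_b).abs()).mean()
```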


Paper Structure

This paper contains 20 sections, 13 equations, 16 figures, 7 tables, and 1 algorithm.

Figures (16)

  • Figure 1: Sparse-view indoor inverse rendering. Our method achieves high-quality scene-level disentanglement of illumination and material properties from sparse-view input images. As can be seen, our method produces high-quality novel-view PBR renderings and outperforms previous methods, e.g., GeoSplat [ye2025geosplatting], IRGS [gu2025irgs], R3DG [gao2024relightable], SVGIR [sun2025svg], and GSIR [liang2024gs], on the Interiorverse dataset [zhu2022learning].
  • Figure 2: Overview of our method. Our framework is trained in two stages. In stage I, we leverage a pretrained VGGT model to obtain a dense scene-layout point cloud. The geometry of the 3D Gaussians is then supervised by normal and semantic priors distilled from pretrained models. In stage II, we perform inverse rendering based on the Gaussians obtained from the first stage. A hybrid illumination model together with a deshadowing module models illumination and occlusion relationships, which are then integrated with the Gaussian-rendered G-buffer for physically based rendering (PBR) of novel views (a simplified shading sketch follows this figure list). Moreover, to ensure consistent material representations during training, we incorporate an additional material-consistency constraint.
  • Figure 3: Comparison of scene radiance and local-light rendering for novel views. As shown, both R3DG and SVGIR fail to accurately model local illumination and occlusion for sparse-view indoor scenes, while our method produces physically plausible rendering with smooth local light effects.
  • Figure 4: Qualitative comparison of inverse rendering on the Interiorverse dataset [zhu2022learning].
  • Figure 5: Qualitative comparison of inverse rendering on the Mip-NeRF 360 dataset [barron2022mip].
  • ...and 11 more figures
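Figure 2's stage-II shading (a hybrid illumination model applied to a Gaussian-rendered G-buffer, gated by a deshadowing/visibility term) can be illustrated with a simplified Lambertian shader. Everything below, from the name `shade_gbuffer` to the global-plus-point-light split, is an assumption-level sketch rather than the paper's exact model.

```python
import torch

def shade_gbuffer(albedo, normal, position,
                  env_irradiance, light_pos, light_color, visibility):
    """Lambertian shading of an H x W G-buffer.

    albedo, normal, position: (H, W, 3) maps splatted from the Gaussians.
    env_irradiance: (3,) global/ambient term of the hybrid light model.
    light_pos, light_color: (L, 3) local point lights.
    visibility: (H, W, L) soft shadow factors in [0, 1], e.g. predicted
        by a deshadowing module.
    """
    to_light = light_pos[None, None] - position[:, :, None]    # (H, W, L, 3)
    dist2 = (to_light ** 2).sum(-1, keepdim=True).clamp_min(1e-6)
    # Cosine term between surface normals and unit light directions.
    cos = (normal[:, :, None] * to_light / dist2.sqrt()).sum(-1).clamp_min(0.0)
    # Local lights attenuate with squared distance and are gated by visibility.
    local = (visibility * cos)[..., None] * light_color[None, None] / dist2
    irradiance = env_irradiance + local.sum(dim=2)             # (H, W, 3)
    return albedo * irradiance                                 # shaded PBR color
```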