NimbusGS: Unified 3D Scene Reconstruction under Hybrid Weather

Yanying Li, Jinyang Li, Shengfeng He, Yangyang Xu, Junyu Dong, Yong Du

Abstract

We present NimbusGS, a unified framework for reconstructing high-quality 3D scenes from degraded multi-view inputs captured under diverse and mixed adverse weather conditions. Unlike existing methods that target specific weather types, NimbusGS addresses the broader challenge of generalization by modeling the dual nature of weather: a continuous, view-consistent medium that attenuates light, and dynamic, view-dependent particles that cause scattering and occlusion. To capture this structure, we decompose degradations into a global transmission field and per-view particulate residuals. The transmission field represents static atmospheric effects shared across views, while the residuals model transient disturbances unique to each input. To enable stable geometry learning under severe visibility degradation, we introduce a geometry-guided gradient scaling mechanism that mitigates gradient imbalance during the self-supervised optimization of 3D Gaussian representations. This physically grounded formulation allows NimbusGS to disentangle complex degradations while preserving scene structure, yielding superior geometry reconstruction and outperforming task-specific methods across diverse and challenging weather conditions. Code is available at https://github.com/lyy-ovo/NimbusGS.
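The decomposition described above can be illustrated with the standard atmospheric scattering model, in which a clean rendering is attenuated by a transmission map and blended with airlight, with transient particle effects added as a per-view residual. The following is a minimal NumPy sketch under that assumption; the function names, the additive residual term, and the exponential transmission-from-extinction relation are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def transmission_from_extinction(extinction, depth):
    """Beer-Lambert-style transmission t = exp(-sigma * d),
    where sigma is a per-pixel extinction coefficient and d is depth."""
    return np.exp(-extinction * depth)

def compose_degraded(clean_rgb, transmission, airlight, residual):
    """Compose a degraded view from a clean rendering:
        I = J * t + A * (1 - t) + R
    J: clean rendering (H, W, 3); t: transmission (H, W);
    A: global airlight color (3,); R: per-view particulate residual (H, W, 3)."""
    t = transmission[..., None]  # broadcast transmission over RGB channels
    return clean_rgb * t + airlight * (1.0 - t) + residual

# Example: with full transmission and no residual, the degraded image
# equals the clean rendering; with zero transmission it collapses to airlight.
clean = np.ones((4, 4, 3))
airlight = np.full(3, 0.8)
no_residual = np.zeros((4, 4, 3))
t_clear = transmission_from_extinction(np.zeros((4, 4)), np.ones((4, 4)))
degraded = compose_degraded(clean, t_clear, airlight, no_residual)
```

In this reading, the global transmission field supplies `t` consistently across views, while `residual` is optimized independently per input view to absorb view-dependent scattering and occlusion.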

Paper Structure

This paper contains 24 sections, 21 equations, 10 figures, 13 tables, 1 algorithm.

Figures (10)

  • Figure 1: Overview of NimbusGS. Starting from a geometry initialization, transient particle effects are separated as per-view residuals. CSM estimates an extinction field from which transmission and airlight are derived, blended with the scene rendering and residuals to reproduce the degradations. This self-supervised process guides the Gaussian representation toward a clean and consistent reconstruction.
  • Figure 2: Qualitative results on hazy scenes. Best viewed zoomed in.
  • Figure 3: Qualitative results on rainy scenes. Best viewed zoomed in.
  • Figure 4: Qualitative results on snowy scenes. Best viewed zoomed in.
  • Figure 5: Qualitative results on hybrid-weather scenes (haze, rain, and snow). Best viewed zoomed in.
  • ...and 5 more figures