
HistoFusionNet: Histogram-Guided Fusion and Frequency-Adaptive Refinement for Nighttime Image Dehazing

Mohammad Heydari, Wei Dong, Shahram Shirani, Jun Chen, Han Zhou

Abstract

Nighttime image dehazing remains a challenging low-level vision problem due to the joint presence of haze, glow, non-uniform illumination, color distortion, and sensor noise, which often invalidate assumptions commonly used in daytime dehazing. To address these challenges, we propose HistoFusionNet, a transformer-enhanced architecture tailored to nighttime image dehazing that combines histogram-guided representation learning with frequency-adaptive feature refinement. Built upon a multi-scale encoder-decoder backbone, our method introduces histogram transformer blocks that model long-range dependencies by grouping features according to their dynamic-range characteristics, enabling more effective aggregation of similarly degraded regions under complex nighttime lighting. To further improve restoration fidelity, we incorporate a frequency-aware refinement branch that adaptively exploits complementary low- and high-frequency cues, helping recover scene structures, suppress artifacts, and enhance local details. This design yields a unified framework that is particularly well suited to the heterogeneous degradations encountered in real nighttime hazy scenes. Extensive experiments on the NTIRE 2026 Nighttime Image Dehazing Challenge benchmark demonstrate the effectiveness of the proposed method: our team ranked 1st among 22 participating teams, highlighting the robustness and competitive performance of HistoFusionNet. The code is available at: https://github.com/heydarimo/Night-Time-Dehazing
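The histogram transformer blocks described above aggregate features globally according to their dynamic-range (intensity) bin rather than their spatial location. The paper's actual blocks use attention; as a rough, hypothetical sketch of the grouping idea only (the function name, bin count, and bin-wise averaging are illustrative simplifications, not the authors' implementation):

```python
import numpy as np

def histogram_guided_aggregate(feat, num_bins=4):
    """Toy sketch (NOT the paper's block): assign each feature value to a
    dynamic-range bin, then aggregate (here: average) all values within the
    same bin, regardless of where they sit spatially. This mimics how
    similarly degraded regions can be pooled together under uneven lighting."""
    flat = feat.reshape(-1)
    # Bin edges span the feature's dynamic range.
    edges = np.linspace(flat.min(), flat.max() + 1e-8, num_bins + 1)
    bin_idx = np.clip(np.digitize(flat, edges) - 1, 0, num_bins - 1)
    out = np.empty_like(flat)
    for b in range(num_bins):
        mask = bin_idx == b
        if mask.any():
            out[mask] = flat[mask].mean()  # global, bin-wise aggregation
    return out.reshape(feat.shape)
```

In the real architecture this grouping would gate a self-attention computation inside each bin; the sketch replaces attention with a mean purely to show the dynamic-range partitioning.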

Figures (6)

  • Figure 1: Test results of our method on the NTIRE 2026 Nighttime Image Dehazing Challenge [ntire2026dehazing]. Our HistoFusionNet achieves the best performance among 22 participating teams and generates visually compelling outputs with faithful colors and enhanced structural details.
  • Figure 2: Overall architecture of HistoFusionNet. Our dehazing network adopts a U-shaped design with a DCNv4-based main branch and an auxiliary frequency-aware branch. Histogram transformer blocks are inserted at the bottleneck to perform dynamic-range aware global aggregation, while a lightweight frequency-adaptive refinement module is employed to enhance color fidelity and recover fine details.
  • Figure 3: Visual comparisons on the NH-HAZE dataset. Compared with other models, our method achieves higher color fidelity and more effective dehazing, yielding compelling results.
  • Figure 4: Visual results on the NH-HAZE2 dataset. Our method demonstrates superior color preservation and detail retention, further enhancing the overall quality of the output.
  • Figure 5: Qualitative comparison on the Dense-Haze dataset. Our method produces clearer structures, better color fidelity, and more faithful detail recovery, leading to higher overall visual quality.
  • ...and 1 more figure
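Figure 2 describes an auxiliary frequency-aware branch that treats low- and high-frequency content differently. A minimal sketch of that general idea, assuming an FFT-based radial band split with per-band gains (the function name, cutoff, and gain parameters are hypothetical illustrations, not the paper's refinement module):

```python
import numpy as np

def frequency_adaptive_refine(img, cutoff=0.25, low_gain=1.0, high_gain=1.5):
    """Hypothetical sketch: split a single-channel image into low- and
    high-frequency bands with a radial FFT mask, rescale each band, and
    recombine. Low frequencies carry coarse scene structure and color;
    high frequencies carry edges and fine detail."""
    H, W = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    yy, xx = np.mgrid[:H, :W]
    # Normalized distance from the spectrum center.
    r = np.hypot(yy - H / 2, xx - W / 2) / (0.5 * np.hypot(H, W))
    low_mask = (r <= cutoff).astype(float)
    low = np.fft.ifft2(np.fft.ifftshift(F * low_mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(F * (1 - low_mask))).real
    return low_gain * low + high_gain * high
```

With both gains set to 1.0 the split is exactly invertible and the input is recovered; in the actual network the band weighting would be learned per feature rather than fixed scalars.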