
Rethinking Exposure Correction for Spatially Non-uniform Degradation

Ao Li, Jiawei Sun, Le Dong, Zhenyu Wang, Weisheng Dong

Abstract

Real-world exposure correction is fundamentally challenged by spatially non-uniform degradations, where diverse exposure errors frequently coexist within a single image. However, existing exposure correction methods are still largely developed under a predominantly uniform assumption. Architecturally, they typically rely on globally aggregated modulation signals that capture only the overall exposure trend. From the optimization perspective, conventional reconstruction losses are usually derived under a shared global scale, thus overlooking the spatially varying correction demands across regions. To address these limitations, we propose a new exposure correction paradigm explicitly designed for spatial non-uniformity. Specifically, we introduce a Spatial Signal Encoder to predict spatially adaptive modulation weights, which are used to guide multiple look-up tables for image transformation, together with an HSL-based compensation module for improved color fidelity. Beyond the architectural design, we propose an uncertainty-inspired non-uniform loss that dynamically allocates the optimization focus based on local restoration uncertainties, better matching the heterogeneous nature of real-world exposure errors. Extensive experiments demonstrate that our method achieves superior qualitative and quantitative performance compared with state-of-the-art methods. Code is available at https://github.com/FALALAS/rethinkingEC.
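The two core ideas in the abstract, per-pixel weights that blend several LUT bases, and a loss reweighted by predicted local uncertainty, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses simplified 1D per-channel LUTs in place of the 3D LUTs described above, and a Kendall-Gal-style aleatoric term as one plausible form of the uncertainty-inspired loss; the function names and shapes are our own assumptions.

```python
import numpy as np

def apply_luts(image, luts, weights):
    """Blend K LUT bases with spatially varying modulation weights.

    image:   (H, W, 3) float in [0, 1], the incorrectly exposed input
    luts:    (K, 256) float, K simplified 1D LUT bases (hypothetical stand-in
             for the paper's 3D LUTs)
    weights: (H, W, K) float, per-pixel weights (e.g. softmax-normalized),
             as a Spatial Signal Encoder might predict
    """
    idx = np.clip((image * 255).astype(int), 0, 255)  # quantize to LUT indices
    mapped = luts[:, idx]                             # (K, H, W, 3): each basis applied
    # Per-pixel convex combination of the K transformed images
    return np.einsum('hwk,khwc->hwc', weights, mapped)

def uncertainty_loss(pred, target, log_sigma):
    """Per-pixel L1 reweighted by a predicted uncertainty map (one common
    formulation; the paper's exact loss may differ)."""
    sigma = np.exp(log_sigma)
    return np.mean(np.abs(pred - target) / sigma + log_sigma)
```

With all weight mass on an identity LUT, `apply_luts` returns the (quantized) input unchanged; shifting weight toward a brightening LUT in dark regions only is what makes the correction spatially non-uniform.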

Paper Structure

This paper contains 21 sections, 15 equations, 9 figures, 4 tables.

Figures (9)

  • Figure 1: Motivation for Our Methodology.
  • Figure 2: Overview of the proposed framework. Given an incorrectly exposed input image, our method first performs non-uniform modulation estimation with a Spatial Signal Encoder to generate a spatially varying modulation signal. The estimated signal is then used to explicitly guide multiple 3D LUT bases for precise image transformation, yielding the restored output image. During training, an Uncertainty Estimator predicts a dense uncertainty map, which is used to construct the proposed uncertainty-inspired non-uniform loss $\mathcal{L}_{UNU}$ with the ground truth image. In this way, our framework jointly models spatial non-uniformity at both the architectural and optimization levels.
  • Figure 3: Illustration of the proposed HSL-based compensation branch. The HSL representation serves two primary purposes. It provides color-aware cues to the adaptive downsampling module and is also fed into a compensation network, where it is fused with the LUT-corrected result to produce the final refined output.
  • Figure 4: Visual comparisons with state-of-the-art methods on the MSEC dataset [msec].
  • Figure 5: Visual comparisons with state-of-the-art methods on the SICE dataset [sice].
  • ...and 4 more figures