
Learning to Recorrupt: Noise Distribution Agnostic Self-Supervised Image Denoising

Brayan Monroy, Jorge Bacca, Julián Tachella

Abstract

Self-supervised image denoising methods have traditionally relied on either architectural constraints or specialized loss functions that require prior knowledge of the noise distribution to avoid the trivial identity mapping. Among these, approaches such as Noisier2Noise and Recorrupted2Recorrupted create training pairs by adding synthetic noise to the noisy images. While effective, these recorruption-based approaches require precise knowledge of the noise distribution, which is often unavailable. We present Learning to Recorrupt (L2R), a noise distribution-agnostic denoising technique that eliminates the need for such knowledge. Our method introduces a learnable monotonic neural network that learns the recorruption process through a min-max saddle-point objective. The proposed method achieves state-of-the-art performance across unconventional and heavy-tailed noise distributions, such as log-gamma, Laplace, and spatially correlated noise, as well as signal-dependent noise models such as Poisson-Gaussian noise.
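For context, the recorruption recipe that L2R generalizes can be sketched concretely. In Recorrupted2Recorrupted, a noisy image $\boldsymbol{y} = \boldsymbol{x} + \boldsymbol{n}$ with known Gaussian noise is split into two conditionally independent noisy views by injecting fresh synthetic noise; the sketch below (NumPy, illustrative values of `sigma` and `alpha` only, not from the paper) shows that fixed construction. L2R's contribution is to replace this hand-designed, distribution-specific step with a learned monotonic recorruptor $h$.

```python
import numpy as np

# Classical Recorrupted2Recorrupted (R2R) pair construction for Gaussian noise.
# Given y = x + n with n ~ N(0, sigma^2 I), draw fresh synthetic noise eps with
# the same statistics and build two recorrupted views of the same clean image.
rng = np.random.default_rng(0)
sigma, alpha = 0.1, 1.0  # illustrative values; R2R assumes sigma is known

x = rng.random((64, 64))                  # stand-in "clean" image
y = x + rng.normal(0.0, sigma, x.shape)   # observed noisy image

eps = rng.normal(0.0, sigma, y.shape)     # synthetic noise, same std as data noise
y1 = y + alpha * eps                      # recorrupted network input
y2 = y - eps / alpha                      # recorrupted regression target

# Given x, the noise in y1 and y2 is uncorrelated, so training a denoiser to
# map y1 -> y2 avoids the trivial identity mapping. L2R learns this
# recorruption instead of assuming the Gaussian form above.
```

Because this construction depends on knowing the noise distribution (here, its Gaussianity and `sigma`), it fails under the log-gamma, Laplace, correlated, and Poisson-Gaussian settings the paper targets, which is the gap the learned recorruptor addresses.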

Paper Structure

This paper contains 23 sections, 57 equations, 6 figures, 6 tables.

Figures (6)

  • Figure 1: Non-Gaussian Image Denoising. L2R is capable of handling complex non-Gaussian noise models without requiring prior knowledge of the noise distribution statistics, including log-gamma, Laplace, and correlated noise.
  • Figure 2: Learning to Recorrupt. Given a noisy image $\boldsymbol{y} \sim p(\boldsymbol{y}|\boldsymbol{x})$ with unknown noise, L2R generates a recorrupted version $\boldsymbol{y}_1$ through a learned recorruptor $h$ to enable self-supervised denoising learning.
  • Figure 3: Non-Gaussian Denoising. Visual comparison on representative BSDS500 crops corrupted by log-gamma, Laplace, and spatially correlated noise. Columns show the noisy input, SURE, UNSURE, R2R, NBR2NBR, and the proposed L2R, followed by the ground truth. Numbers in each panel report PSNR (dB) / SSIM. Across all noise types, L2R better suppresses structured artifacts and heavy-tailed noise while preserving edges and fine textures.
  • Figure 4: Poisson-Gaussian Denoising. Visual comparison on representative BSDS500 crops corrupted by Poisson-Gaussian noise. Columns show the noisy input, PG-SURE, PG-UNSURE, GR2R, NBR2NBR, and the proposed L2R, followed by the ground truth. Numbers in each panel report PSNR (dB) / SSIM.
  • Figure 5: Training dynamics of the implicit correlation terms. Left: noise correlation $C_\varepsilon$ (black) and recorruption bias $C_h$ (blue) versus epoch. Right: residual gap $C_{\mathrm{\Delta}}$ (green). Both inner-product terms remain centered near zero and the gap vanishes, indicating convergence toward the desired equilibrium.
  • ...and 1 more figure