
Provably Contractive and High-Quality Denoisers for Convergent Restoration

Shubhi Shukla, Pravin Nair

Abstract

Image restoration, the recovery of clean images from degraded measurements, has applications in various domains like surveillance, defense, and medical imaging. Despite achieving state-of-the-art (SOTA) restoration performance, existing convolutional and attention-based networks lack stability guarantees under minor shifts in input, exposing a robustness-accuracy trade-off. We develop provably contractive (global Lipschitz $< 1$) denoiser networks that considerably reduce this gap. Our design composes proximal layers obtained from unfolding techniques with Lipschitz-controlled convolutional refinements. By contractivity, our denoiser guarantees that input perturbations of strength $\|\delta\|\le\varepsilon$ induce at most an $\varepsilon$ change at the output, while strong baselines such as DnCNN and Restormer can exhibit larger deviations under the same perturbations. On image denoising, the proposed model is competitive with unconstrained SOTA denoisers, reporting the tightest gap for a provably 1-Lipschitz model and establishing that such gaps are indeed achievable by contractive denoisers. Moreover, the proposed denoisers act as strong regularizers for image restoration that provably ensure convergence in Plug-and-Play algorithms. Our results show that enforcing strict Lipschitz control does not inherently degrade output quality, challenging a common assumption in the literature and moving the field toward verifiable and stable vision models. Codes and pretrained models are available at https://github.com/SHUBHI1553/Contractive-Denoisers

Paper Structure

This paper contains 17 sections, 4 theorems, 17 equations, 40 figures, 20 tables.

Key Result

Lemma 1 (Single FBS iteration)

Let $f:\mathbb{R}^n\!\to\!\mathbb{R}$ be $f(\boldsymbol{x}) = \tfrac{1}{2}\|\boldsymbol{y} - \boldsymbol{x}\|^2$ for fixed $\boldsymbol{y} \in \mathbb{R}^n$, and let $g:\mathbb{R}^n\!\to\!\mathbb{R}\cup\{\infty\}$ be proper, closed, and convex. For $\alpha>0$, define
$$T_\alpha(\boldsymbol{x}) \;=\; \mathrm{prox}_{\alpha g}\big(\boldsymbol{x} - \alpha\nabla f(\boldsymbol{x})\big).$$
Then $T_\alpha$ is $(1-\alpha)$-contractive for any $\alpha\in(0,1)$.
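The contraction in Lemma 1 can be checked numerically: since $\nabla f(\boldsymbol{x}) = \boldsymbol{x} - \boldsymbol{y}$, the gradient step is $(1-\alpha)\boldsymbol{x} + \alpha\boldsymbol{y}$, which is $(1-\alpha)$-Lipschitz, and the prox of a proper, closed, convex $g$ is nonexpansive. The sketch below is illustrative only (not the paper's code) and assumes $g = \lambda\|\cdot\|_1$, whose prox is soft-thresholding:

```python
import numpy as np

def soft_threshold(v, t):
    # prox of t*||.||_1 -- nonexpansive (1-Lipschitz)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fbs_step(x, y, alpha, lam=0.1):
    # T_alpha(x) = prox_{alpha*g}(x - alpha*grad f(x)),
    # with grad f(x) = x - y for f(x) = 1/2 ||y - x||^2
    grad = x - y
    return soft_threshold(x - alpha * grad, alpha * lam)

rng = np.random.default_rng(0)
y = rng.standard_normal(64)
x1, x2 = rng.standard_normal(64), rng.standard_normal(64)
alpha = 0.3

lhs = np.linalg.norm(fbs_step(x1, y, alpha) - fbs_step(x2, y, alpha))
rhs = (1 - alpha) * np.linalg.norm(x1 - x2)
assert lhs <= rhs + 1e-12  # (1 - alpha)-contractivity holds on this pair
```

Any other valid prox (e.g. projection onto a convex set) would give the same bound, since the gradient step alone already contributes the factor $(1-\alpha)$.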

Figures (40)

  • Figure 3: Architecture of contractive layer $\mathrm{T}$. The layer performs a gradient step and prox-wavelet operation, followed by a scaled convolution.
  • Figure 4: Color Gaussian denoising ($\sigma=15$). Our contractive denoiser preserves fine structures similar to unconstrained models, while closely matching Restormer.
  • Figure 5: Color Gaussian denoising ($\sigma=15$). Our contractive model preserves feather and eye-ring textures similar to the best-performing Restormer.
  • Figure 6: Performance of our contractive denoiser under different PnP algorithms for deblurring (90$\%$ random sparse kernel)
  • Figure 7: PnP convergence. All $1$-Lipschitz denoisers converge; unconstrained models diverge, while ours achieves best PSNR.
  • ...and 35 more figures

Theorems & Definitions (5)

  • Definition 1: Lipschitz continuity
  • Lemma 1: Single FBS iteration
  • Lemma 2: Lipschitz constant of compositions
  • Proposition 1
  • Theorem 1
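Lemma 2 (Lipschitz constant of compositions) is the workhorse behind designs like the one in this paper: the Lipschitz constant of a composition is at most the product of the layers' constants, so if every layer is made 1-Lipschitz the whole network is nonexpansive. A minimal sketch under assumed settings (dense layers normalized by their spectral norm, ReLU activations; not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(1)

def one_lipschitz(W):
    # Divide by the largest singular value so that
    # ||W x - W x'|| <= ||x - x'|| for all x, x'.
    return W / np.linalg.norm(W, 2)

# Four spectrally-normalized layers; ReLU is itself 1-Lipschitz,
# so by Lemma 2 the composition is 1-Lipschitz overall.
layers = [one_lipschitz(rng.standard_normal((32, 32))) for _ in range(4)]

def net(x):
    for W in layers:
        x = np.maximum(W @ x, 0.0)
    return x

x1, x2 = rng.standard_normal(32), rng.standard_normal(32)
gap_out = np.linalg.norm(net(x1) - net(x2))
gap_in = np.linalg.norm(x1 - x2)
assert gap_out <= gap_in + 1e-12  # output gap never exceeds input gap
```

Making the network strictly contractive, as the paper's denoisers are, would additionally require scaling so the product of layer constants falls strictly below 1.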