
Stochastic Generative Plug-and-Play Priors

Chicago Y. Park, Edward P. Chandler, Yuyang Hu, Michael T. McCann, Cristina Garcia-Cardona, Brendt Wohlberg, Ulugbek S. Kamilov

Abstract

Plug-and-play (PnP) methods are widely used for solving imaging inverse problems by incorporating a denoiser into optimization algorithms. Score-based diffusion models (SBDMs) have recently demonstrated strong generative performance through a denoiser trained across a wide range of noise levels. Despite their shared reliance on denoisers, it remains unclear how to systematically use SBDMs as priors within the PnP framework without relying on reverse diffusion sampling. In this paper, we establish a score-based interpretation of PnP that justifies using pretrained SBDMs directly within PnP algorithms. Building on this connection, we introduce a stochastic generative PnP (SGPnP) framework that injects noise to better leverage the expressive generative SBDM priors, thereby improving robustness in severely ill-posed inverse problems. We provide a new theory showing that this noise injection induces optimization on a Gaussian-smoothed objective and promotes escape from strict saddle points. Experiments on challenging inverse tasks, such as multi-coil MRI reconstruction and large-mask natural image inpainting, demonstrate consistent improvement over conventional PnP methods and achieve performance competitive with diffusion-based solvers.
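The abstract describes SGPnP as a PnP scheme that injects noise into the iterations before applying a pretrained denoiser as the prior. A minimal sketch of one plausible proximal-gradient-style variant is given below; the function names, update order, and noise scale are illustrative assumptions for a toy linear inverse problem, not the paper's exact algorithm:

```python
import numpy as np

def sgpnp_pgm(y, A, At, denoise, x0, gamma=1.0, sigma=0.1, iters=100, rng=None):
    """Sketch of a stochastic PnP proximal-gradient iteration (assumed form):
    a gradient step on the data fidelity 0.5 * ||A x - y||^2, followed by
    Gaussian noise injection, followed by a plug-in denoiser acting as the prior.

    A / At : forward operator and its adjoint (callables).
    denoise: denoiser taking (noisy image, noise level).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = x0.copy()
    for _ in range(iters):
        grad = At(A(x) - y)                            # data-fidelity gradient
        z = x - gamma * grad                           # gradient (fidelity) step
        z = z + sigma * rng.standard_normal(z.shape)   # stochastic noise injection
        x = denoise(z, sigma)                          # denoiser as the prior step
    return x
```

With a trivial identity denoiser and a masking operator, the observed entries converge to the measurements while the noise injection keeps the iterates stochastic; a real application would plug in a pretrained score-based denoiser instead.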

Paper Structure

This paper contains 22 sections, 4 theorems, 31 equations, 6 figures, 8 tables, and 3 algorithms.

Key Result

Theorem 1

Run the SGPnP iteration in Eq. (eq:iter_final_clean) under Assumptions (ass:smoothness)–(ass:denoiser_variance). Then, for any $\delta \in (0, 1)$, there exists a stepsize schedule $\{\gamma_k\}_{k\geq 0}$ and a number of iterations $K$ such that, with probability at least $1-\delta$, the iterates avoid strict saddle points.
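The abstract's companion claim, that noise injection induces optimization on a Gaussian-smoothed objective, presumably refers to the standard Gaussian smoothing of an objective $f$. In the usual notation (an assumption here, since the excerpt shows only a truncated theorem statement, and $f$, $\sigma$ are generic symbols rather than the paper's own):

```latex
f_\sigma(x) \;=\; \mathbb{E}_{\varepsilon \sim \mathcal{N}(0, I)}\!\left[ f(x + \sigma \varepsilon) \right],
\qquad
\nabla f_\sigma(x) \;=\; \mathbb{E}_{\varepsilon \sim \mathcal{N}(0, I)}\!\left[ \nabla f(x + \sigma \varepsilon) \right].
```

Smoothing of this form removes sharp local structure from $f$, which is consistent with the stated benefit of escaping strict saddle points.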

Figures (6)

  • Figure 1: Comparison of deterministic PnP [zhang2021dpir], stochastic PnP [renaud2024snore], and SGPnP on three inpainting tasks: random masks, box masks with small missing regions, and box masks with large missing regions. Deterministic PnP succeeds only on random inpainting, while stochastic PnP extends this to small box masks but fails for large missing regions. In contrast, the proposed SGPnP approach performs reliably across all three settings.
  • Figure 2: Qualitative visual comparison corresponding to the quantitative results in Table 1 (table:comparison_table1). Note that for challenging box inpainting, the deterministic PnP method (DPIR) and the stochastic DRUNet-based PnP method (SNORE) produce incomplete reconstructions, whereas SGPnP produces more plausible image completions.
  • Figure 3: Qualitative visual comparison corresponding to the quantitative results in Table 2 (table:comparison_table2). Using the same score-based prior, stochastic noise injection produces more realistic completions in challenging box inpainting and improved reconstructions for accelerated MRI.
  • Figure 4: Box inpainting results obtained with SGPnP-PGM from repeated runs on the same measurements. The method produces consistent reconstructions across runs.
  • Figure 5: Additional box inpainting results on more measurements. For this challenging task, DPIR and SNORE often produce incomplete reconstructions. SDPnP-PGM improves the result but still remains incomplete in some cases, whereas SGPnP-PGM produces more plausible image completions.
  • ...and 1 more figure

Theorems & Definitions (10)

  • Definition 1
  • Theorem 1
  • Theorem 2
  • Lemma 1
  • Proof
  • Definition 2
  • Proof
  • Proof
  • Lemma 2
  • Proof