An iterative thresholding algorithm for linear inverse problems with a sparsity constraint

Ingrid Daubechies, Michel Defrise, Christine De Mol

TL;DR

This work develops a sparsity-promoting regularization framework for linear inverse problems Kf=g by replacing the quadratic penalty with a weighted l^p penalty on basis coefficients, 1≤p≤2. It derives a surrogate-functional Landweber-type algorithm with thresholding f^n = S_{w,p}(f^{n−1}+K^*(g−Kf^{n−1})), and proves convergence in norm to a minimizer of Φ_{w,p}. The authors show that smaller p yields sparser representations, and establish stability: with a μ-regularized objective and appropriately chosen μ(ε), the regularized solution converges to the minimal-norm (or weighted-norm) solution as data noise ε→0, with rates tied to operator smoothing α and Besov-sparsity σ. An illustration on 2D deconvolution demonstrates practical gains of the sparsity-aware thresholding, and the paper discusses generalizations, including Besov priors, complex-valued data, frames, and computational considerations for large-scale problems. The results provide a general, norm-convergent sparse regularization method for inverse problems that can adapt to wavelet Besov priors and other orthonormal bases, with clear implications for imaging and signal reconstruction.
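The iteration itself is short enough to sketch. Below is a minimal NumPy illustration of the $p=1$ case (Landweber iteration with soft thresholding), assuming $K$ is given as a matrix rescaled so that $\|K^*K\| < 1$ and identifying basis coefficients with vector entries; the function names and the toy problem are illustrative, not the authors' code.

```python
import numpy as np

def soft_threshold(x, w):
    """Componentwise soft thresholding S_{w,1}: shrink x toward 0 by w/2."""
    return np.sign(x) * np.maximum(np.abs(x) - w / 2.0, 0.0)

def iterative_thresholding(K, g, w, n_iter=200):
    """Landweber iteration with shrinkage (p = 1 case):
    f^n = S_{w,1}(f^{n-1} + K^T (g - K f^{n-1})).
    Assumes ||K^T K|| < 1 (rescale K and g beforehand if necessary)."""
    f = np.zeros(K.shape[1])
    for _ in range(n_iter):
        f = soft_threshold(f + K.T @ (g - K @ f), w)
    return f

# Toy usage: recover a sparse vector from a noisy underdetermined system.
rng = np.random.default_rng(0)
K = rng.standard_normal((50, 100))
K /= np.linalg.norm(K, 2) * 1.01          # enforce ||K^T K|| < 1
f_true = np.zeros(100)
f_true[[3, 40, 77]] = [1.0, -0.5, 2.0]
g = K @ f_true + 0.01 * rng.standard_normal(50)
f_rec = iterative_thresholding(K, g, w=0.02)
```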

Abstract

We consider linear inverse problems where the solution is assumed to have a sparse expansion on an arbitrary pre-assigned orthonormal basis. We prove that replacing the usual quadratic regularizing penalties by weighted l^p-penalties on the coefficients of such expansions, with 1 ≤ p ≤ 2, still regularizes the problem. If p < 2, regularized solutions of such l^p-penalized problems will have sparser expansions, with respect to the basis under consideration. To compute the corresponding regularized solutions we propose an iterative algorithm that amounts to a Landweber iteration with thresholding (or nonlinear shrinkage) applied at each iteration step. We prove that this algorithm converges in norm. We also review some potential applications of this method.
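Concretely, the regularized solutions are the minimizers of the penalized functional

$\Phi_{\mathbf{w},p}(f) = \|Kf - g\|^2 + \sum_{\gamma \in \Gamma} w_{\gamma} \, |\langle f, \varphi_{\gamma} \rangle|^p, \qquad 1 \le p \le 2,$

where $(\varphi_{\gamma})_{\gamma \in \Gamma}$ is the chosen orthonormal basis and the weights $w_{\gamma}$ are strictly positive; for $p = 2$ with constant weights this reduces to classical Tikhonov (quadratic) regularization.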

Paper Structure

This paper contains 18 sections, 28 theorems, 125 equations, and 3 figures.

Key Result

Proposition 2.1

Suppose the operator $K$ maps a Hilbert space $\mathcal{H}$ to another Hilbert space $\mathcal{H}'$, with $\|K^* K\| < 1$, and suppose $g$ is an element of $\mathcal{H}'$. Let $(\varphi_{\gamma})_{\gamma \in \Gamma}$ be an orthonormal basis for $\mathcal{H}$, and let $\mathbf{w}=(w_{\gamma})_{\gamma \in \Gamma}$ be a sequence of strictly positive numbers. Pick arbitrary $p \ge 1$ and $a \in \mathcal{H}$. Then $\Phi^{\mathrm{SUR}}_{\mathbf{w},p}(f; a)$ has a unique minimizer in $\mathcal{H}$, given by $f = \mathbf{S}_{\mathbf{w},p}\bigl(a + K^*(g - Ka)\bigr)$, where the operator $\mathbf{S}_{\mathbf{w},p}$ applies the scalar shrinkage function $S_{w_{\gamma},p}$ to each coefficient $\langle \,\cdot\,, \varphi_{\gamma} \rangle$.
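Proposition 2.1 thus reduces the surrogate minimization to a componentwise scalar shrinkage. A minimal sketch of that scalar map, using the paper's $F_{w,p}(x) = x + \frac{wp}{2}\,\mathrm{sign}(x)\,|x|^{p-1}$ and inverting it by bisection for $p > 1$ (the function name and tolerance are illustrative choices):

```python
import numpy as np

def shrink(b, w, p, tol=1e-12):
    """Scalar shrinkage S_{w,p}(b): invert the strictly increasing map
    F_{w,p}(x) = x + (w*p/2) * sign(x) * |x|**(p-1)
    by bisection for p > 1; for p = 1 this is soft thresholding at w/2."""
    if p == 1.0:
        return np.sign(b) * max(abs(b) - w / 2.0, 0.0)
    lo, hi = 0.0, abs(b)  # the solution has the sign of b and |x| <= |b|
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid + (w * p / 2.0) * mid ** (p - 1) < abs(b):
            lo = mid
        else:
            hi = mid
    return np.sign(b) * lo
```

Applying this map, with weight $w_{\gamma}$, to each coefficient $\langle a + K^*(g - Ka), \varphi_{\gamma} \rangle$ yields the minimizer in the proposition.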

Figures (3)

  • Figure 1: The object $f$ (top left), the image $g$ after convolving with a radially symmetric low-pass filter and adding pseudo-random Poisson noise (top right), and the minimizers of $\Phi_{\mu_1 \mathbf{w}_0,1}$ (bottom left) and $\Phi_{\mu_2 \mathbf{w}_0,2}$ (bottom right). The values $\mu_1=0.001$ and $\mu_2=0.0001$ were selected separately for the $\ell^1$- and $\ell^2$-cases, to balance sharpness against ringing and noise.
  • Figure 2: A comparison to illustrate the impact of the positivity constraint, imposed at every iteration step. On the left are the fixed points for $P_{\cal C} \mathbf{S}_{\mu_1 \mathbf{w}_0,1}$ (top) and $\mathbf{S}_{\mu_1 \mathbf{w}_0,1}$ (bottom); on the right those of $P_{\cal C} \mathbf{S}_{\mu_2 \mathbf{w}_0,2}$ (top) and $\mathbf{S}_{\mu_2 \mathbf{w}_0,2}$ (bottom). The data and the values of $\mu_1,~\mu_2$ are the same as in Figure 1.
  • Figure 3: A different view of the four solutions in Figure 2, with a different dynamic range for the image intensity gray scale, to highlight ringing and other artifacts.

Theorems & Definitions (37)

  • Proposition 2.1
  • Lemma 2.2
  • Corollary 2.3
  • Remark 2.4
  • Remark 2.5
  • Theorem 3.1
  • Theorem 3.2
  • Lemma 3.3
  • Lemma 3.4
  • Lemma 3.5
  • ...and 27 more