
A remark on an error analysis for classical and learned Tikhonov regularization schemes

Arne Behrens, Meira Iske, Ming Jiang, Peter Maass, Sebastian Neumayer

Abstract

This paper presents an error analysis of classical and learned Tikhonov regularization schemes for inverse problems. We first demonstrate, both theoretically and numerically, that using a fixed regularization parameter across varying noise levels, a common misspecification in practice, has only a mild impact on the reconstruction error. As a special case, we then investigate scenarios where the true data resides in an unknown finite-dimensional subspace. Here, our results lead to a numerically supported strategy for estimating the unknown dimension. Finally, we examine the approach that motivated this study: a method where a sparsity-promoting term is learned from denoising tasks and subsequently applied to general inverse problems via a simple heuristic parameter selection. The corresponding error analysis is initially developed using classical concepts and subsequently refined through a more detailed investigation of the discretized setting.

Paper Structure

This paper contains 17 sections, 7 theorems, 62 equations, and 6 figures.

Key Result

Theorem 1

Let $X,Y$ be Hilbert spaces and $A \colon X \rightarrow Y$ be compact with $\|A\| = 1$. Further, let $y^\dagger = Ax^\dagger$ for $x^\dagger \in X$ and assume that Assumption \ref{ass:1} holds. If $\alpha>0$ and $y^\delta \in Y$ with $\| y^\delta - y^\dagger\| \leq \delta$, then the reconstruction error satisfies …
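The setting of Theorem 1 can be sketched numerically. The following is an illustrative example (not the authors' code): Tikhonov regularization $x^\delta_\alpha = (A^*A + \alpha I)^{-1} A^* y^\delta$ for a small diagonal operator with decaying singular values and $\|A\| = 1$, combined with the heuristic parameter choice $\alpha(\delta) = \delta/\rho$ used in the paper's experiments. The particular operator, test signal, and choice of $\rho$ here are assumptions made for the sketch.

```python
import numpy as np

# Illustrative sketch (not the paper's code) of Tikhonov regularization:
#   x_alpha^delta = argmin_x ||A x - y^delta||^2 + alpha ||x||^2
#                 = (A^T A + alpha I)^{-1} A^T y^delta,
# with the heuristic parameter choice alpha(delta) = delta / rho.

rng = np.random.default_rng(0)
n = 50

# Diagonal operator with polynomially decaying singular values,
# normalized so that ||A|| = 1 as in Theorem 1.
s = 1.0 / (1.0 + np.arange(n)) ** 2
A = np.diag(s / s.max())

x_true = rng.standard_normal(n)        # x^dagger
y_true = A @ x_true                    # y^dagger = A x^dagger

delta = 0.01                           # noise level ||y^delta - y^dagger|| <= delta
noise = rng.standard_normal(n)
y_delta = y_true + delta * noise / np.linalg.norm(noise)

rho = np.linalg.norm(x_true)           # stand-in constant for alpha(delta) = delta/rho
alpha = delta / rho

x_rec = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y_delta)
err = np.linalg.norm(x_rec - x_true)
print(f"alpha = {alpha:.3e}, reconstruction error = {err:.3f}")
```

Because the singular values decay quickly, components of $x^\dagger$ in directions with small singular values are essentially lost, which is exactly the ill-posedness the regularization parameter trades off against the noise level.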

Figures (6)

  • Figure 1: Test data $x^\dagger$ for both inverse problems together with noise-free and noise-perturbed measurements $(y,y^\delta)$ and the corresponding Tikhonov reconstruction $x^\delta_{\alpha(\delta)}$ of $y^\delta$, using $\delta = 0.01$ and $\alpha(\delta) = \delta/\rho$. (left) MNIST test image and measurement generated via $A_R$, (right) test sample from \ref{eq:zdata_gen} and measurements generated via $A_I$ at $n=50$.
  • Figure 2: Analytical worst-case and relative worst-case errors as in Corollary \ref{cor:wc_error} for $A_R$ with $\alpha(\delta) = \delta/\rho$ and ${\rho = 15.74}$.
  • Figure 3: Top: Reconstruction errors for $A_R$ with Tikhonov regularization ($\rho = 15.74$) (left), LPD (middle), and LISTA (right). We plot the errors at fixed regularization level $\bar{\delta}$ over varying $\delta$ (upper row) and over varying regularization levels at fixed $\delta$ (middle row) as well as the corresponding relative errors (bottom row). The dashed lines correspond to the analytical bounds as in Corollary \ref{cor:wc_error}. Bottom: The same plots for $A_I$, where we have $\rho = 0.58$.
  • Figure 4: Reconstruction errors for data in $X_{8}$ and $A_I$ with Tikhonov regularization ($\rho = 0.23$) (left), LPD (middle), and LISTA (right). We plot the errors at fixed regularization level $\bar{\delta}$ over varying $\delta$ (upper row) and over varying $\bar{\delta}$ at fixed $\delta$ (middle row) as well as the corresponding relative errors (bottom row). The dashed lines correspond to the analytical bounds as in Corollary \ref{cor:wc_error}.
  • Figure 5: Reconstruction errors using truncated Tikhonov regularization for simulated data (top) and MNIST (bottom) plotted against varying noise levels defined by $\delta$. In all subplots, we fix one basis for reconstruction and data generation.
  • ...and 1 more figure

Theorems & Definitions (20)

  • Theorem 1
  • Proof
  • Remark 2
  • Corollary 3
  • Proof
  • Remark 4
  • Theorem 5
  • Proof
  • Remark 6
  • Corollary 7
  • ...and 10 more