
Error Estimates for Data-driven Weakly Convex Frame-based Image Regularization

Andrea Ebner, Matthias Schwab, Markus Haltmeier

TL;DR

This paper introduces a non-linear filtered DFD method, together with a strategy for learning optimal non-linear filters from training data pairs, and derives stability, convergence, and convergence rates with respect to the absolute symmetric Bregman distance for the resulting learned non-linear regularizing filters.
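Concretely, a non-linear filtered DFD reconstruction in this line of work typically takes the following form (a sketch in the standard filtered-DFD notation, which may differ from this paper's exact symbols): given a DFD $(u_\lambda, v_\lambda, \kappa_\lambda)_{\lambda \in \Lambda}$ of the forward operator,

$$\mathcal{F}_\alpha(y) \;=\; \sum_{\lambda \in \Lambda} \frac{1}{\kappa_\lambda}\, \varphi_\alpha\bigl(\kappa_\lambda, \langle y, v_\lambda \rangle\bigr)\, \bar{u}_\lambda,$$

where $(\bar{u}_\lambda)$ is a dual frame of $(u_\lambda)$ and the non-linear filter $\varphi_\alpha$ dampens coefficients, most strongly in directions with small quasi-singular values $\kappa_\lambda$, where plain inversion would amplify the noise.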

Abstract

Inverse problems are fundamental in fields like medical imaging, geophysics, and computerized tomography, aiming to recover unknown quantities from observed data. However, these problems often lack stability due to noise and ill-conditioning, leading to inaccurate reconstructions. To mitigate these issues, regularization methods are employed, introducing constraints to stabilize the inversion process and achieve a meaningful solution. Recent research has shown that the application of regularizing filters to diagonal frame decompositions (DFD) yields regularization methods. These filters dampen some frame coefficients to prevent noise amplification. This paper introduces a non-linear filtered DFD method combined with a learning strategy for determining optimal non-linear filters from training data pairs. In our experiments, we applied this approach to the inversion of the Radon transform using 500 image-sinogram pairs from real CT scans. Although the learned filters were found to be strictly increasing, they did not satisfy the non-expansiveness condition required to link them with convex regularizers and prove stability and convergence in the sense of regularization methods in previous works. Inspired by this, the paper relaxes the non-expansiveness condition, resulting in weakly convex regularization. Despite this relaxation, we managed to derive stability, convergence, and convergence rates with respect to the absolute symmetric Bregman distance for the learned non-linear regularizing filters. Extensive numerical results demonstrate the effectiveness of the proposed method in achieving stable and accurate reconstructions.
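The mechanism described above can be illustrated with a small numerical sketch. The example below is not the authors' learned-filter method: it uses a toy SVD (the simplest diagonal frame decomposition) in place of a wavelet-type DFD of the Radon transform, and a classical soft-thresholding filter in place of a learned filter. It only demonstrates how damping frame coefficients before inversion prevents noise amplification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ill-conditioned forward operator built from its SVD, the simplest
# instance of a diagonal frame decomposition (DFD). The decaying values
# s play the role of the quasi-singular values kappa_lambda.
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -6, n)
A = U @ np.diag(s) @ V.T

x_true = rng.standard_normal(n)
delta = 1e-3                                   # noise level
y = A @ x_true + delta * rng.standard_normal(n)

def soft_threshold(c, t):
    """A simple non-linear, non-expansive filter: shrink coefficients toward zero."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def filtered_dfd(y, alpha):
    c = U.T @ y                                # analysis coefficients <y, u_i>
    # Dampen coefficients more aggressively where s_i is small, i.e. in the
    # directions where plain inversion would blow up the noise.
    c = soft_threshold(c, alpha / s)
    return V @ (c / s)                         # synthesis with 1/s_i (inversion)

x_naive = V @ ((U.T @ y) / s)                  # unfiltered pseudoinverse
x_reg = filtered_dfd(y, alpha=delta)

err_naive = np.linalg.norm(x_naive - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
print(f"unfiltered error: {err_naive:.2e}, filtered error: {err_reg:.2e}")
```

The unfiltered pseudoinverse divides noisy coefficients by tiny $s_i$ and is unusable, while the filtered reconstruction stays stable. In the paper, the hand-crafted soft-threshold is replaced by filters learned from data, which turn out not to be non-expansive, motivating the weakly convex analysis.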

Paper Structure

This paper contains 22 sections, 11 theorems, 71 equations, 4 figures, and 2 tables.

Key Result

Proposition 6

The following assertions hold:

Figures (4)

  • Figure 1: Learned filters for different noise levels (left), different quasi-singular values (middle), and different noise types (right).
  • Figure 2: The left panel shows the filter functions of Example \ref{ex:like_learned} for different $\alpha$ and fixed $\kappa$; the right panel shows their corresponding regularizing functions $s_{\alpha,\lambda}$. These are inspired by the learned filter functions in Figure 1.
  • Figure 3: Comparison of ground truth images $x^+$, filtered back projections $\operatorname{FBP}(y^\delta)$, and learned reconstructions $\mathcal{F}_{\hat{\theta}(\delta)}(y^\delta)$. Sinograms $y^\delta$ were corrupted with Gaussian noise (top two rows) and Salt & Pepper noise (bottom two rows) with noise levels $\delta = 4$ (rows 1 and 3) and $\delta = 12$ (rows 2 and 4).
  • Figure 4: Graphical illustration of the neighbouring condition \ref{phi_inte} and assumption \ref{ass:A1}. The red area is where $\varphi_{\alpha}(\kappa_\lambda,\cdot)$ should lie, and the blue area is for the corresponding functional $s_{\alpha,\lambda}$. Here $\operatorname{Prox}_{q+}$ refers to the right bound of condition \ref{phi_inte}, namely the function $\operatorname{Prox}_{\alpha ( q_\lambda - L(\cdot)^2)/\kappa_\lambda }$, and $\operatorname{Prox}_{q-}$ refers to the left bound. The upper part illustrates the case where assumption \ref{ass:A1} partly dominates condition \ref{phi_inte}; the lower part depicts the scenario where assumption \ref{ass:A1} contributes nothing beyond condition \ref{phi_inte}.

Theorems & Definitions (34)

  • Definition 1: Diagonal Frame Decomposition, DFD
  • Definition 2: Non-linear Regularizing Filter
  • Definition 3: Weakly Convex Regularizing Filter
  • Definition 4: Weakly Strictly Convex, Weakly Coercive
  • Definition 5: Generalized Proximity Operator
  • Proposition 6
  • Proof
  • Definition 7: ${\boldsymbol{\kappa}}$-Regularizer
  • Lemma 8
  • Proof
  • ...and 24 more