Bilevel Programming Approach for Image Restoration Problems with Automatic Hyperparameter Selection

Hang Xie, Xuewen Li, Peili Li, Qiuyu Wang

Abstract

In optimization-based image restoration models, the correct selection of hyperparameters is crucial for achieving superior performance. However, current research typically involves manual tuning of these hyperparameters, which is highly time-consuming and often lacks accuracy. In this paper, we concentrate on the automated selection of hyperparameters in the context of image restoration and present a bilevel programming approach that simultaneously selects the optimal hyperparameters and achieves high-quality restoration results. For implementation, we reformulate the bilevel programming problem as an equivalent problem with an inequality constraint involving a difference of convex functions. We then solve a sequence of nonsmooth convex programming problems by employing a feasibility penalty function along with a proximal point term. In this context, each nonsmooth convex programming problem uses the solution of the lower-level problem, which is computed by the alternating direction method of multipliers. Theoretically, we prove that the sequence generated by the algorithm converges to a Karush-Kuhn-Tucker stationary point of the inequality-constrained equivalent bilevel programming problem. We conduct a series of tests on both simulated and real images, which demonstrate that the proposed algorithm achieves superior restoration quality while requiring less computing time than other hyperparameter selection methods.
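The abstract states that the lower-level restoration problem is solved with the alternating direction method of multipliers (ADMM). As a purely illustrative sketch (not the paper's model: the paper's lower-level problem involves $\Psi$ and total-variation terms, while the quadratic data term, `b`, and `lam` below are toy placeholders), here is ADMM applied to a separable $\ell_1$-regularized least-squares problem:

```python
# Illustrative ADMM for  min_x 0.5*||x - b||^2 + lam*||x||_1
# (a toy stand-in for the lower-level restoration problem).

def soft_threshold(v, t):
    """Proximal operator of t*|.|: shrink v toward zero by t."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def admm_l1(b, lam, rho=1.0, iters=500):
    n = len(b)
    x = [0.0] * n          # primal variable
    z = [0.0] * n          # split copy of x
    u = [0.0] * n          # scaled dual variable
    for _ in range(iters):
        # x-update: minimize 0.5*(x-b)^2 + (rho/2)*(x - z + u)^2
        x = [(b[i] + rho * (z[i] - u[i])) / (1.0 + rho) for i in range(n)]
        # z-update: proximal step on the l1 term (soft thresholding)
        z = [soft_threshold(x[i] + u[i], lam / rho) for i in range(n)]
        # dual update on the consensus constraint x = z
        u = [u[i] + x[i] - z[i] for i in range(n)]
    return z

# For this separable toy problem the exact minimizer is
# soft_threshold(b[i], lam) componentwise:
print(admm_l1([3.0, -0.5, 1.2], lam=1.0))  # values near [2.0, 0.0, 0.2]
```

The same splitting pattern (a quadratic update, a proximal update, a dual update) is what makes ADMM attractive for restoration models whose regularizers have cheap proximal operators.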

Paper Structure

This paper contains 10 sections, 9 theorems, 24 equations, 5 figures, 2 tables, 1 algorithm.

Key Result

Lemma 3.1

Suppose that $\bar{x} \in \mathbb{S}_p(\lambda)$ with $\lambda \in \mathbb{R}^2_+$, and let $r = (r_1,r_2) = (\|\Psi \bar{x}\|_1, \|\bar{x}\|_{\mathrm{TV}}) \in \mathbb{R}^2_+$. Then, we have $\bar{x} \in \mathbb{S}_c(r)$.
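The conclusion follows from a standard penalized-to-constrained argument. As a sketch under assumed definitions (the full paper defines these sets; here we assume $\mathbb{S}_p(\lambda)$ is the solution set of the penalized problem $\min_x f(x) + \lambda_1\|\Psi x\|_1 + \lambda_2\|x\|_{\mathrm{TV}}$ and $\mathbb{S}_c(r)$ that of the constrained problem $\min_x f(x)$ subject to $\|\Psi x\|_1 \le r_1$, $\|x\|_{\mathrm{TV}} \le r_2$, for some data-fidelity term $f$):

```latex
% For any y feasible for the constrained problem, i.e.
% \|\Psi y\|_1 \le r_1 and \|y\|_{TV} \le r_2, optimality of \bar{x}
% for the penalized problem gives
f(\bar{x}) + \lambda_1 r_1 + \lambda_2 r_2
  \;\le\; f(y) + \lambda_1 \|\Psi y\|_1 + \lambda_2 \|y\|_{\mathrm{TV}}
  \;\le\; f(y) + \lambda_1 r_1 + \lambda_2 r_2,
% hence f(\bar{x}) \le f(y). Since \bar{x} is itself feasible
% (its penalty values equal r by construction), \bar{x} \in \mathbb{S}_c(r).
```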

Figures (5)

  • Figure 1: The ground-truth phantom image (a), the undersampled image with added noise (b), and the restored images by algorithms GS (c), RS (d), TPE (e), and IR-DCA (f).
  • Figure 2: Convergence behavior of all the tested algorithms on the phantom image: PSNR value versus computing time (a) and iterations (c); RLNE values versus computing time (b) and iterations (d).
  • Figure 3: The ground-truth brain magnetic resonance image (a), the undersampled image with added noise (b), and the restored images by algorithms GS (c), RS (d), TPE (e), and IR-DCA (f).
  • Figure 4: Convergence behavior of all the tested algorithms on a brain magnetic resonance image: PSNR value versus computing time (a) and iterations (c); RLNE values versus computing time (b) and iterations (d).
  • Figure 5: Some brain gray images used in this test.

Theorems & Definitions (12)

  • Definition 3.1: KKT point
  • Lemma 3.1
  • Lemma 3.2
  • Lemma 3.3
  • Theorem 3.1
  • Theorem 3.2
  • Lemma 3.4
  • Theorem 3.3
  • Definition 3.2
  • Definition 3.3
  • ...and 2 more