Convergence analysis of a stochastic heavy-ball method for linear ill-posed problems

Qinian Jin, Yanjun Liu

Abstract

In this paper we consider a stochastic heavy-ball method for solving linear ill-posed inverse problems. With suitable choices of the step-sizes and the momentum coefficients, we establish the regularization property of the method under {\it a priori} selection of the stopping index and derive the rate of convergence under a benchmark source condition on the sought solution. Numerical results are provided to test the performance of the method.

Paper Structure

This paper contains 8 sections, 11 theorems, 128 equations, 4 figures, and 2 algorithms.

Key Result

Lemma 2.1

Consider the sequences $\{x_n^\delta\}$ and $\{x_n\}$ defined by Algorithm \ref{alg:SHB} using noisy data and exact data, respectively. Assume that $0<\eta_i <1/\|A_i\|^2$ for $i = 1, \cdots, p$. Then for all integers $n \ge 0$ there holds a stability estimate involving $\bar{\eta}:= \max\{\eta_i: i = 1, \cdots, p\}$ and $c_0:= \min\{1-\eta_i \|A_i\|^2: i = 1, \cdots, p\}$.
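To make the setting concrete, the following is a minimal sketch of a stochastic heavy-ball iteration for a linear system $Ax = b$ split into row blocks $A_i x = b_i$, with step-sizes satisfying the Lemma 2.1 condition $0 < \eta_i < 1/\|A_i\|^2$. The function name, the constant momentum coefficient `beta`, and the factor `0.9` in the step-sizes are illustrative assumptions, not the paper's exact Algorithm \ref{alg:SHB}, which uses particular choices of step-sizes and momentum coefficients.

```python
import numpy as np

def stochastic_heavy_ball(A_blocks, b_blocks, n_iter, beta=0.5, rng=None):
    """Sketch of a stochastic heavy-ball iteration for A x = b split into
    row blocks A_i x = b_i. The constant momentum beta and the 0.9 safety
    factor below are illustrative choices, not the paper's schedule."""
    rng = np.random.default_rng(rng)
    d = A_blocks[0].shape[1]
    # Step-sizes satisfying the Lemma 2.1 condition 0 < eta_i < 1/||A_i||^2
    # (np.linalg.norm(Ai, 2) is the spectral norm of the block).
    etas = [0.9 / np.linalg.norm(Ai, 2) ** 2 for Ai in A_blocks]
    x_prev = np.zeros(d)
    x = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(len(A_blocks))  # sample a block uniformly at random
        # Stochastic gradient of (1/2)||A_i x - b_i||^2 plus a momentum term.
        grad = A_blocks[i].T @ (A_blocks[i] @ x - b_blocks[i])
        x, x_prev = x - etas[i] * grad + beta * (x - x_prev), x
    return x
```

Setting `beta = 0` recovers a plain stochastic gradient descent step on the sampled block, which is the method the paper compares against in Figure 3.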

Figures (4)

  • Figure 1: Illustration of the semi-convergence of Algorithm \ref{alg:SHB} using noisy data with various relative noise levels.
  • Figure 2: Illustration of the effect of $\eta_{i_n}$ chosen by (\ref{DP}), which incorporates the discrepancy principle.
  • Figure 3: Comparison between Algorithm \ref{alg:SHB} and the stochastic gradient descent method (\ref{SGD}).
  • Figure 4: Relative reconstruction error of Algorithm \ref{alg:SHB2}. entropy: $\eta_{i_n} = 0.98/\|A\|^2$; entropy-DP: $\eta_{i_n}$ is chosen by (\ref{DP}) with $\mu_0 = 0.98$.

Theorems & Definitions (24)

  • Lemma 2.1
  • proof
  • Lemma 2.2
  • proof
  • Lemma 2.3
  • proof
  • Lemma 2.4
  • proof
  • Theorem 2.5
  • proof
  • ...and 14 more