
Stable Signal Recovery from Incomplete and Inaccurate Measurements

Emmanuel Candes, Justin Romberg, Terence Tao

TL;DR

The paper proves that sparse or approximately sparse signals can be stably recovered from incomplete and noisy measurements by solving convex $\ell_1$-minimization problems, provided the measurement matrix satisfies a restricted isometry-type condition. It establishes deterministic stability bounds: exact recovery in the noiseless case and error bounds proportional to the noise level in the noisy case, with explicit constants depending on RIP parameters. The results cover Gaussian, Fourier, and other random or structured measurement ensembles, and extend to compressible signals and image-like data via wavelets or total-variation models. Numerical experiments on 1D signals and a 2D image corroborate the theory, showing recovery errors close to the combined approximation and perturbation errors and highlighting practical reconstruction strategies.

Abstract

Suppose we wish to recover an m-dimensional real-valued vector x_0 (e.g. a digital signal or image) from incomplete and contaminated observations y = A x_0 + e; A is an n by m matrix with far fewer rows than columns (n << m) and e is an error term. Is it possible to recover x_0 accurately based on the data y? To recover x_0, we consider the solution x* to the l1-regularization problem min \|x\|_1 subject to \|Ax-y\|_2 <= epsilon, where epsilon is the size of the error term e. We show that if A obeys a uniform uncertainty principle (with unit-normed columns) and if the vector x_0 is sufficiently sparse, then the solution is within the noise level: \|x* - x_0\|_2 \le C epsilon. As a first example, suppose that A is a Gaussian random matrix; then stable recovery occurs for almost all such A's provided that the number of nonzeros of x_0 is of about the same order as the number of observations. Second, suppose one observes few Fourier samples of x_0; then stable recovery occurs for almost any set of n frequencies provided that the number of nonzeros is of the order of n/[\log m]^6. In the case where the error term vanishes, the recovery is of course exact, and this work actually provides novel insights on the exact recovery phenomenon discussed in earlier papers. The methodology also explains why one can very nearly recover approximately sparse signals.
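As a small illustration of the noiseless case (epsilon = 0) mentioned in the abstract, the $\ell_1$ problem reduces to basis pursuit, min \|x\|_1 subject to Ax = y, which can be recast as a linear program. The sketch below is not from the paper; the matrix sizes n = 40, m = 100 and sparsity k = 5 are arbitrary illustrative choices. It draws a Gaussian measurement matrix A and checks exact recovery:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 40, 100, 5                           # measurements, ambient dimension, sparsity
A = rng.standard_normal((n, m)) / np.sqrt(n)   # Gaussian measurement matrix
x0 = np.zeros(m)
x0[rng.choice(m, size=k, replace=False)] = rng.choice([-1.0, 1.0], size=k)
y = A @ x0                                     # noiseless measurements (e = 0)

# Basis pursuit  min ||x||_1  s.t.  Ax = y,  recast as an LP over (x, t):
#   min sum(t)   s.t.   x - t <= 0,  -x - t <= 0,  Ax = y,  t >= 0
c = np.concatenate([np.zeros(m), np.ones(m)])
I = np.eye(m)
A_ub = np.block([[I, -I], [-I, -I]])           # encodes |x_i| <= t_i
b_ub = np.zeros(2 * m)
A_eq = np.hstack([A, np.zeros((n, m))])        # equality constraint acts on x only
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * m + [(0, None)] * m, method="highs")
x_hat = res.x[:m]
err = np.linalg.norm(x_hat - x0)               # recovery error
```

With these proportions the sparsity is well below the recovery threshold, so `err` should come out near solver precision; shrinking n or growing k will eventually break exact recovery.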

Paper Structure

This paper contains 10 sections, 2 theorems, 39 equations, 4 figures, 3 tables.

Key Result

Theorem 1

Let $S$ be such that $\delta_{3S} + 3 \delta_{4S} < 2$. Then for any signal $x_0$ supported on $T_0$ with $|T_0|\leq S$ and any perturbation $e$ with $\|e\|_{\ell_2} \leq \epsilon$, the solution $x^\sharp$ to $(P_2)$ obeys $$\|x^\sharp - x_0\|_{\ell_2} \leq C_S \cdot \epsilon,$$ where the constant $C_S$ may only depend on $\delta_{4S}$. For reasonable values of $\delta_{4S}$, $C_S$ is well behaved; e.g. $C_S \approx 8.82$ for $\delta_{4S} = 1/5$ and $C_S \approx 10.47$ for $\delta_{4S} = 1/4$.
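Theorem 1's stability bound can be probed numerically. The $\ell_2$-ball constraint of $(P_2)$ is not linear, so the sketch below substitutes an $\ell_\infty$ box $\|Ax-y\|_\infty \leq \delta$ — an LP-solvable surrogate that is our assumption, not the paper's program — and checks that the recovery error stays on the order of the noise level, as the theorem predicts for the true $(P_2)$. Problem sizes and the choice of $\delta$ are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m, k = 60, 128, 5
A = rng.standard_normal((n, m)) / np.sqrt(n)
x0 = np.zeros(m)
x0[rng.choice(m, size=k, replace=False)] = rng.choice([-1.0, 1.0], size=k)
sigma = 0.05
y = A @ x0 + sigma * rng.standard_normal(n)    # noisy measurements

# Surrogate for (P2):  min ||x||_1  s.t.  ||Ax - y||_inf <= delta,
# with delta a heuristic per-coordinate bound on Gaussian noise.
delta = sigma * np.sqrt(2 * np.log(n))
c = np.concatenate([np.zeros(m), np.ones(m)])
I, Z = np.eye(m), np.zeros((n, m))
A_ub = np.block([[I, -I],                      # x - t <= 0
                 [-I, -I],                     # -x - t <= 0
                 [A, Z],                       # Ax <= y + delta
                 [-A, Z]])                     # -Ax <= delta - y
b_ub = np.concatenate([np.zeros(2 * m), y + delta, delta - y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * m + [(0, None)] * m, method="highs")
x_hat = res.x[:m]
err = np.linalg.norm(x_hat - x0)               # should scale with the noise level
```

Consistent with the theorem, `err` typically comes out as a small multiple of the noise size, far below $\|x_0\|_{\ell_2} = \sqrt{5}$.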

Figures (4)

  • Figure 1: Geometry in $\mathbb{R}^2$. Here, the point $x_0$ is a vertex of the $\ell_1$ ball and the shaded area represents the set of points obeying both the tube and the cone constraints. By showing that every vector in the cone of descent at $x_0$ is approximately orthogonal to the nullspace of $A$, we will ensure that $x^\sharp$ is not too far from $x_0$.
  • Figure 2: (a) Example of a sparse signal used in the 1D experiments. There are $50$ non-zero coefficients taking values $\pm 1$. (b) Sparse signal recovered from noisy measurements with $\sigma = 0.05$. (c) Example of a compressible signal used in the 1D experiments. (d) Compressible signal recovered from noisy measurements with $\sigma=0.05$.
  • Figure 3: (a) Original $256\times 256$ Boats image. (b) Recovery via $(TV)$ from $n=25000$ measurements corrupted with Gaussian noise. (c) Recovery via $(TV)$ from $n=25000$ measurements corrupted by round-off error. In both cases, the reconstruction error is less than the sum of the nonlinear approximation and measurement errors.
  • Figure 4: (a) Noiseless measurements $Ax_0$ of the Boats image. (b) Gaussian measurement error with $\sigma = 5\cdot 10^{-4}$ in the recovery experiment summarized in the left column of the image-results table. The signal-to-noise ratio is $\|Ax_0\|_{\ell_2}/\|e\|_{\ell_2} = 4.5$. (c) Round-off error in the recovery experiment summarized in the right column of the image-results table. The signal-to-noise ratio is $\|Ax_0\|_{\ell_2}/\|e\|_{\ell_2} = 4.3$.

Theorems & Definitions (2)

  • Theorem 1
  • Theorem 2