Stable Signal Recovery from Incomplete and Inaccurate Measurements
Emmanuel Candes, Justin Romberg, Terence Tao
TL;DR
The paper proves that sparse or approximately sparse signals can be stably recovered from incomplete and noisy measurements by solving convex $\ell_1$-minimization problems, provided the measurement matrix satisfies a restricted isometry-type condition. It establishes deterministic stability bounds: exact recovery in the noiseless case and error bounds proportional to the noise level in the noisy case, with explicit constants depending on RIP parameters. The results cover Gaussian, Fourier, and other random or structured measurement ensembles, and extend to compressible signals and image-like data via wavelets or total-variation models. Numerical experiments on 1D signals and a 2D image corroborate the theory, showing recovery errors close to the combined approximation and perturbation errors and highlighting practical reconstruction strategies.
Abstract
Suppose we wish to recover an m-dimensional real-valued vector $x_0$ (e.g. a digital signal or image) from incomplete and contaminated observations $y = A x_0 + e$; here $A$ is an $n \times m$ matrix with far fewer rows than columns ($n \ll m$) and $e$ is an error term. Is it possible to recover $x_0$ accurately from the data $y$? To recover $x_0$, we consider the solution $x^*$ to the $\ell_1$-regularization problem $\min \|x\|_1$ subject to $\|Ax - y\|_2 \le \epsilon$, where $\epsilon$ bounds the size of the error term $e$. We show that if $A$ obeys a uniform uncertainty principle (with unit-normed columns) and if the vector $x_0$ is sufficiently sparse, then the solution is within the noise level: $\|x^* - x_0\|_2 \le C \epsilon$. As a first example, if $A$ is a Gaussian random matrix, then stable recovery occurs for almost all such $A$'s provided that the number of nonzeros of $x_0$ is of about the same order as the number of observations. Second, if one observes few Fourier samples of $x_0$, then stable recovery occurs for almost any set of $n$ coefficients provided that the number of nonzeros is of the order of $n/(\log m)^6$. When the error term vanishes, the recovery is of course exact, and this work provides novel insights into the exact recovery phenomenon discussed in earlier papers. The methodology also explains why one can very nearly recover approximately sparse signals.
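In the noiseless case ($\epsilon = 0$) the $\ell_1$-minimization above reduces to a linear program via the standard variable-splitting reformulation. A minimal sketch of this noiseless variant using SciPy's `linprog` (the helper `basis_pursuit`, the problem sizes, and the random seed are illustrative choices, not from the paper; the noisy case $\|Ax - y\|_2 \le \epsilon$ is a second-order cone program and needs a different solver):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 subject to Ax = y as a linear program.

    Introduce an auxiliary vector u with -u <= x <= u and minimize sum(u);
    at the optimum u_i = |x_i|, so sum(u) equals the l1 norm of x.
    """
    n, m = A.shape
    # Decision vector z = [x; u]; objective picks out sum(u).
    c = np.concatenate([np.zeros(m), np.ones(m)])
    # Inequalities encoding -u <= x <= u:  x - u <= 0  and  -x - u <= 0.
    I = np.eye(m)
    A_ub = np.block([[I, -I], [-I, -I]])
    b_ub = np.zeros(2 * m)
    # Equality constraint Ax = y (the u block is unconstrained here).
    A_eq = np.hstack([A, np.zeros((n, m))])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * m + [(0, None)] * m)
    return res.x[:m]

# Demo: recover a 3-sparse vector from 40 Gaussian measurements.
rng = np.random.default_rng(0)
m, n, k = 100, 40, 3
A = rng.standard_normal((n, m)) / np.sqrt(n)   # roughly unit-norm columns
x0 = np.zeros(m)
x0[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)
x_hat = basis_pursuit(A, A @ x0)
print(np.linalg.norm(x_hat - x0))  # near zero: exact recovery up to solver tolerance
```

With $n = 40$ Gaussian measurements and only $k = 3$ nonzeros, the sparsity is well within the regime the abstract describes, so the LP recovers $x_0$ essentially exactly.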
