Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
Emmanuel Candès, Terence Tao
TL;DR
This work develops a near-optimal framework for recovering finite-dimensional signals from far fewer linear measurements than the ambient dimension N, provided the signal is sparse or compressible in a power-law (weak-ℓ_p) sense. By establishing two key properties, the Uniform Uncertainty Principle and the Exact Reconstruction Principle, for several random measurement ensembles (Gaussian, binary, Fourier), the authors prove that ℓ₁-minimization recovers the signal from K measurements with ℓ₂ error scaling like (K/log N)^{−(1/p−1/2)}, matching the best K-term approximation rate up to the logarithmic factor. The results unify the theory across these sensing setups and introduce the notion of universal encoding: random projections act as a robust encoder that achieves near-optimal reconstruction without prior knowledge of the signal. The paper also connects the recovery guarantees to entropy methods and random-matrix theory, with practical implications for imaging and compressed sensing under realistic measurement constraints.
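As a concrete illustration, here is a minimal sketch (our own code, not the authors') of the recovery procedure the paper analyzes: draw a Gaussian measurement ensemble, take K random measurements of a sparse signal, and recover it by ℓ₁-minimization (basis pursuit), cast as a linear program. The dimensions and the choice of scipy.optimize.linprog as the solver are illustrative assumptions.

```python
# Basis-pursuit recovery from Gaussian random measurements (illustrative sketch).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, K, S = 256, 80, 8                 # ambient dimension, measurements, sparsity

f = np.zeros(N)                      # S-sparse ground-truth signal
f[rng.choice(N, S, replace=False)] = rng.standard_normal(S)

A = rng.standard_normal((K, N)) / np.sqrt(K)   # Gaussian measurement ensemble
y = A @ f                                      # K linear measurements of f

# Basis pursuit  min ||x||_1  s.t.  Ax = y,  as an LP over variables (x, t):
#   minimize sum(t)  subject to  -t <= x <= t  and  Ax = y.
c = np.concatenate([np.zeros(N), np.ones(N)])
A_ub = np.block([[ np.eye(N), -np.eye(N)],
                 [-np.eye(N), -np.eye(N)]])
b_ub = np.zeros(2 * N)
A_eq = np.hstack([A, np.zeros((K, N))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * N + [(0, None)] * N)

f_hat = res.x[:N]
print("relative l2 error:", np.linalg.norm(f_hat - f) / np.linalg.norm(f))
```

With K on the order of S·log(N/S) measurements (80 here is comfortably enough for S = 8, N = 256), the LP typically recovers f exactly up to solver tolerance, which is the phenomenon the theorems make quantitative.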
Abstract
Suppose we are given a vector $f$ in $\mathbb{R}^N$. How many linear measurements of $f$ do we need to recover $f$ to within precision $\varepsilon$ in the Euclidean ($\ell_2$) metric? Or, more exactly, suppose we are interested in a class ${\cal F}$ of such objects (discrete digital signals, images, etc.); how many linear measurements do we need to recover objects from this class to within accuracy $\varepsilon$? This paper shows that if the objects of interest are sparse or compressible, in the sense that the reordered entries of a signal $f \in {\cal F}$ decay like a power-law (or the coefficient sequence of $f$ in a fixed basis does), then it is possible to reconstruct $f$ to very high accuracy from a small number of random measurements.
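To make the compressibility model concrete, the following small numeric check (ours, assuming the power-law model where the $n$-th largest entry is bounded by $R \cdot n^{-1/p}$) illustrates that the best $K$-term approximation error in $\ell_2$ decays like $K^{-(1/p - 1/2)}$, the benchmark rate the recovery guarantee matches up to the $\log N$ factor.

```python
# Best K-term approximation error for a power-law (weak-l_p) signal.
import numpy as np

p, R, N = 0.7, 1.0, 10_000
n = np.arange(1, N + 1)
f_sorted = R * n ** (-1.0 / p)           # sorted entries with power-law decay

for K in (10, 100, 1000):
    tail = np.linalg.norm(f_sorted[K:])  # l2 error of keeping the K largest entries
    print(K, tail, K ** (0.5 - 1.0 / p)) # observed error vs. predicted rate K^{1/2-1/p}
```

The ratio of the observed error to the predicted rate stabilizes as $K$ grows, which is exactly why $K$ random measurements suffice for this class: nonadaptive random projections lose only the logarithmic factor relative to keeping the $K$ largest coefficients with full knowledge of the signal.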
