Decoding by Linear Programming

Emmanuel Candès, Terence Tao

TL;DR

The paper shows that exact recovery of a signal f from corrupted measurements y = Af + e is possible via ℓ1 minimization when the error e is sparse and the coding matrix A (or its annihilator F) satisfies restricted orthogonality conditions. A dual certificate approach proves uniqueness of the ℓ1 minimizer, with deterministic guarantees under δ_S and θ_{S,S'} constraints, and Gaussian matrices are shown to satisfy these conditions with high probability for small sparsity levels. Numerical experiments confirm robust exact recovery up to substantial fractions of corrupted outputs, and connections to optimal recovery demonstrate near-optimal performance for compressible signals using the same linear programming framework. The results unify deterministic and probabilistic perspectives, extend to general coding matrices, and suggest practical decoding strategies with broad relevance to compressed sensing and error correction.

Abstract

This paper considers the classical error-correction problem of coding theory. We wish to recover an input vector $f \in \mathbb{R}^n$ from corrupted measurements $y = A f + e$. Here, $A$ is an $m$ by $n$ (coding) matrix and $e$ is an arbitrary and unknown vector of errors. Is it possible to recover $f$ exactly from the data $y$? We prove that under suitable conditions on the coding matrix $A$, the input $f$ is the unique solution to the $\ell_1$-minimization problem ($\|x\|_{\ell_1} := \sum_i |x_i|$) $$ \min_{g \in \mathbb{R}^n} \| y - Ag \|_{\ell_1} $$ provided that the support of the vector of errors is not too large, $\|e\|_{\ell_0} := |\{i : e_i \neq 0\}| \le \rho \cdot m$ for some $\rho > 0$. In short, $f$ can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program). In addition, numerical experiments suggest that this recovery procedure works unreasonably well; $f$ is recovered exactly even in situations where a significant fraction of the output is corrupted.
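The recast into a linear program mentioned in the abstract is the standard one: introducing slack variables $t \in \mathbb{R}^m$ that majorize the residual magnitudes turns the nondifferentiable objective into a linear one,
$$ \min_{g \in \mathbb{R}^n} \| y - Ag \|_{\ell_1} \quad \Longleftrightarrow \quad \min_{g \in \mathbb{R}^n,\; t \in \mathbb{R}^m} \sum_{i=1}^m t_i \quad \text{subject to} \quad -t_i \le y_i - (Ag)_i \le t_i, \quad i = 1, \dots, m. $$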

Paper Structure

This paper contains 19 sections, 10 theorems, 80 equations, and 3 figures.

Key Result

Lemma 1.2

We have $\theta_{S,S'} \leq \delta_{S+S'} \leq \theta_{S,S'} + \max(\delta_S,\delta_{S'})$ for all $S$, $S'$.
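For context, $\delta_S$ and $\theta_{S,S'}$ are the restricted isometry and restricted orthogonality constants of Definition 1.1, paraphrased here: $\delta_S$ is the smallest quantity such that $$ (1 - \delta_S)\,\|c\|^2 \le \| F_T c \|^2 \le (1 + \delta_S)\,\|c\|^2 $$ for every submatrix $F_T$ formed from the columns of $F$ indexed by a set $T$ with $|T| \le S$ and every coefficient vector $c$, while $\theta_{S,S'}$ is the smallest quantity such that $$ |\langle F_T c, F_{T'} c' \rangle| \le \theta_{S,S'} \cdot \|c\|\,\|c'\| $$ for all disjoint index sets $T, T'$ with $|T| \le S$ and $|T'| \le S'$. The lemma thus says the combined constant $\delta_{S+S'}$ is controlled by, and in turn controls, the pair $(\theta_{S,S'}, \max(\delta_S, \delta_{S'}))$.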

Figures (3)

  • Figure 1: Behavior of the upper bound $\rho_{p/m}(r)$ for three values of the ratio $p/m$, namely, $p/m = 3/4, 2/3, 1/2$.
  • Figure 2: $\ell_1$-recovery of an input signal from $y = Af + e$ with $A$ an $m$ by $n$ matrix with independent Gaussian entries. In this experiment, we 'oversample' the input signal by a factor 2 so that $m = 2n$. (a) Success rate of $(P_1)$ for $m = 512$. (b) Success rate of $(P_1)$ for $m = 1024$. Observe the similar pattern and cut-off point. In these experiments, exact recovery occurs as long as about 17% or less of the entries are corrupted.
  • Figure 3: $\ell_1$-recovery of an input signal from $y = Af + e$ with $A$ an $m$ by $n$ matrix with independent Gaussian entries. In this experiment, we 'oversample' the input signal by a factor 4 so that $m = 4n$. In these experiments, exact recovery occurs as long as about 34% or less of the entries are corrupted.
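The experiment behind Figures 2 and 3 can be sketched as follows. This is a minimal, hypothetical reproduction (not the authors' code); the matrix sizes and the 10% corruption rate are illustrative choices, and the decoder solves the $\ell_1$ problem via its linear-programming recast using `scipy.optimize.linprog`.

```python
# Sketch of the Gaussian-matrix recovery experiment (illustrative, not the paper's code).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

m, n = 512, 256          # oversample by a factor 2, as in Figure 2(a)
k = int(0.10 * m)        # corrupt 10% of entries (paper reports a ~17% cutoff here)

A = rng.standard_normal((m, n))   # coding matrix with i.i.d. Gaussian entries
f = rng.standard_normal(n)        # input signal

e = np.zeros(m)                   # sparse, arbitrary error vector
support = rng.choice(m, size=k, replace=False)
e[support] = rng.standard_normal(k)
y = A @ f + e

# Recast min_g ||y - A g||_1 as an LP in x = [g; t]:
#   minimize sum(t)  subject to  A g - t <= y  and  -A g - t <= -y
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
b_ub = np.concatenate([y, -y])
bounds = [(None, None)] * n + [(0, None)] * m  # g free, t >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
assert res.success, res.message
f_hat = res.x[:n]
print("exact recovery:", np.allclose(f_hat, f, atol=1e-5))
```

Sweeping the corruption fraction `k / m` and recording the success rate over many random trials reproduces the cutoff behavior shown in Figures 2 and 3.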

Theorems & Definitions (11)

  • Definition 1.1: Restricted isometry constants
  • Lemma 1.2
  • Lemma 1.3
  • Theorem 1.4
  • Theorem 1.5
  • Theorem 1.6
  • Corollary 1.7
  • Lemma 2.1: Dual sparse reconstruction property, $\ell_2$ version
  • Lemma 2.2: Dual sparse reconstruction property, $\ell_\infty$ version
  • Lemma 3.1
  • ...and 1 more
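Lemmas 2.1 and 2.2 above package the dual certificate mentioned in the TL;DR. In its standard generic form (stated here for orientation, not verbatim from the paper): since $FA = 0$, decoding reduces to recovering the sparse error from $Fy = Fe$, and $e$ is the unique minimizer of $\min \|d\|_{\ell_1}$ subject to $Fd = Fe$ whenever $F$ is injective on vectors supported on $\operatorname{supp}(e)$ and there exists a vector $w$ with
$$ (F^* w)_j = \operatorname{sgn}(e_j) \ \text{ for } j \in \operatorname{supp}(e), \qquad |(F^* w)_j| < 1 \ \text{ for } j \notin \operatorname{supp}(e). $$
The restricted isometry and orthogonality conditions are what allow such a $w$ to be constructed explicitly.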