Partially deterministic sampling for compressed sensing with denoising guarantees

Yaniv Plan, Matthew S. Scott, Ozgur Yilmaz

Abstract

We study compressed sensing when the sampling vectors are chosen from the rows of a unitary matrix. In the literature, these sampling vectors are typically chosen randomly; the use of randomness has enabled major empirical and theoretical advances in the field. However, in practice there are often certain crucial sampling vectors, in which case practitioners will depart from the theory and sample such rows deterministically. In this work, we derive an optimized sampling scheme for Bernoulli selectors which naturally combines random and deterministic selection of rows, thus rigorously deciding which rows should be sampled deterministically. This sampling scheme provides measurable improvements in image compressed sensing for both generative and sparse priors when compared to with-replacement and without-replacement sampling schemes, as we show with theoretical results and numerical experiments. Additionally, our theoretical guarantees feature improved sample complexity bounds compared to previous works, and novel denoising guarantees in this setting.
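The abstract describes a Bernoulli-selector scheme in which each row of a unitary matrix is included independently with some probability, and rows whose probability saturates at 1 are included deterministically. The sketch below is a hypothetical illustration of that idea, not the paper's actual optimized weights: it assumes inclusion probabilities proportional to squared local coherences, capped at 1 and iteratively rescaled so the expected number of measurements equals a budget `m`. The coherence values and the proportionality rule are assumptions for illustration only.

```python
import numpy as np

def bernoulli_selection_probs(alpha, m, iters=50):
    """Hypothetical inclusion probabilities: p_i proportional to alpha_i^2,
    capped at 1 and rescaled so that sum(p) = m (expected sample count).
    Rows with p_i = 1 are sampled deterministically."""
    w = alpha.astype(float) ** 2
    p = np.minimum(1.0, m * w / w.sum())
    for _ in range(iters):
        free = p < 1.0                 # rows not yet forced deterministic
        budget = m - np.sum(~free)     # budget left after deterministic rows
        if budget <= 0 or not free.any():
            break
        scale = budget / p[free].sum()
        p = np.where(free, np.minimum(1.0, scale * p), 1.0)
    return p

rng = np.random.default_rng(0)
n, m = 64, 16
# Toy "local coherences": a few rows far more coherent with the prior set
# (e.g. low frequencies of a DFT on natural images) than the rest.
alpha = rng.uniform(0.1, 0.3, size=n)
alpha[:3] = 1.0
p = bernoulli_selection_probs(alpha, m)
mask = rng.random(n) < p               # Bernoulli selectors; p_i = 1 rows always kept
```

Under these assumed weights, the three highly coherent rows end up with probability 1 and are always selected, while the remaining rows are sampled at random, which mirrors the combined deterministic-plus-random selection the abstract describes.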

Paper Structure

This paper contains 18 sections, 14 theorems, 63 equations, and 4 figures.

Key Result

Proposition 2.7: Norm of the optimized probability weights

Figures (4)

  • Figure 1: We consider a fixed coherence vector $\boldsymbol{\alpha}$ across all experiments, given by the local coherences of the DFT matrix on the flower dataset [nilsbackAutomatedFlowerClassification2008] used as the prior set ($n = 16384$). The right plot compares numerically the bound on $m$ induced by a bound of the form $m \ge L^2(\boldsymbol{\alpha}, m) \Lambda$ (see the above discussion).
  • Figure 2: The generative plot has 200 experiments for each data point, and the sparse plot has 20. Sparsity level is $k = 500$ (1%) and the code dimension of the generative model is $k = 200$ (0.5%). We display a line for geometric mean and a band for the geometric standard error (the uncertainty of the geometric mean estimator).
  • Figure 3: Comparison between optimized sampling distribution. The generative plot has 200 experiments for each data point, and the sparse plot has 20. We display a line for geometric mean and a band for the geometric standard error (the uncertainty of the geometric mean estimator). Sparsity level is $k = 500$ (1%) and the code dimension of the generative model is $k = 200$ (0.5%).
  • Figure 4: Comparison of optimized without-replacement sampling, under two different preconditioners, with the optimized Bernoulli sampling scheme. The label "wor" denotes optimized without-replacement sampling, with "empirical" preconditioning as in [hoppeSamplingStrategiesCompressive2023] and "heuristic" preconditioning as introduced in the text. The label "Bernoulli, Ber" denotes the optimized Bernoulli sampling scheme with the Bernoulli preconditioner. The generative plot has 200 experiments for each data point, and the sparse plot has 20. We display a line for the geometric mean and a band for the geometric standard error (the uncertainty of the geometric mean estimator). Note that in the generative (left) plot, the Bernoulli curve lies mostly underneath the "wor, heuristic" curve. Sparsity level is $k = 500$ (1%) and the code dimension of the generative model is $k = 200$ (0.5%).
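The figure captions report a geometric mean with a geometric-standard-error band. Assuming the standard definitions (geometric mean as the exponential of the mean log-error, and the band as a multiplicative standard-error factor on the log scale), the quantities plotted can be computed as follows; the function name and sample values are illustrative only.

```python
import numpy as np

def geometric_mean_and_band(errors):
    """Geometric mean of positive errors and a multiplicative
    standard-error band (assumed standard definitions)."""
    logs = np.log(np.asarray(errors, dtype=float))
    gm = np.exp(logs.mean())
    # Standard error of the mean on the log scale, exponentiated
    # back into a multiplicative factor.
    gse = np.exp(logs.std(ddof=1) / np.sqrt(len(logs)))
    return gm, (gm / gse, gm * gse)

# Toy reconstruction errors from repeated experiments.
errs = [0.10, 0.12, 0.08, 0.11, 0.09]
gm, (lo, hi) = geometric_mean_and_band(errs)
```

Working on the log scale is the natural choice for reconstruction errors spanning orders of magnitude, and it makes the uncertainty band multiplicative rather than additive, matching the bands shown in Figures 2-4.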

Theorems & Definitions (38)

  • Definition 2.1: Local coherence
  • Definition 2.2: Bernoulli selector sampling matrix
  • Definition 2.4: Optimized Bernoulli weights
  • Remark 2.5
  • Remark 2.6
  • Proposition 2.7: Norm of the optimized probability weights
  • Theorem 2.8: Optimized Bernoulli CS on union of subspaces
  • Remark 2.9
  • Proposition 2.10: Upper bound Bernoulli L with local coherences
  • Proposition 2.11: Monotonicity of L in m
  • ...and 28 more