
Near-Optimal Parallel Approximate Counting via Sampling

David G. Harris, Vladimir Kolmogorov, Hongyang Liu, Yitong Yin, Yiyao Zhang

Abstract

The computational equivalence between approximate counting and sampling is well established for polynomial-time algorithms. The most efficient general reduction from counting to sampling is achieved via simulated annealing, where the counting problem is formulated in terms of estimating the ratio $Q=Z(\beta_{\max})/Z(\beta_{\min})$ between partition functions $Z(\beta)=\sum_{x\in \Omega} \exp(\beta H(x))$ of Gibbs distributions $\mu_\beta$ over $\Omega$ with Hamiltonian $H$, given access to a sampling oracle that produces samples from $\mu_\beta$ for $\beta\in [\beta_{\min}, \beta_{\max}]$. The best bound achieved by known annealing algorithms with relative error $\varepsilon$ is $O(q \log h / \varepsilon^2)$, where $q, h$ are parameters which respectively bound $\ln Q$ and $H$. However, all known algorithms attaining this near-optimal complexity are inherently sequential, or *adaptive*: the queried parameters $\beta$ depend on previous samples. We develop a simple non-adaptive algorithm for approximate counting using $O(q \log^2 h / \varepsilon^2)$ samples, as well as an algorithm that achieves $O(q \log h / \varepsilon^2)$ samples with just two rounds of adaptivity, matching the best sample complexity of sequential algorithms. These algorithms naturally give rise to work-efficient parallel (RNC) counting algorithms. We discuss applications to RNC counting algorithms for several classic models, including the anti-ferromagnetic 2-spin, monomer-dimer, and ferromagnetic Ising models.
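The annealing reduction described in the abstract rests on a standard unbiased ratio estimator (a reminder, not stated explicitly above): for any $\beta, \beta' \in [\beta_{\min}, \beta_{\max}]$,

$$\mathbb{E}_{x\sim\mu_{\beta}}\!\left[e^{(\beta'-\beta)H(x)}\right] \;=\; \sum_{x\in\Omega}\frac{e^{\beta H(x)}}{Z(\beta)}\,e^{(\beta'-\beta)H(x)} \;=\; \frac{Z(\beta')}{Z(\beta)},$$

so for any schedule $\beta_{\min}=\beta_0 < \beta_1 < \cdots < \beta_m = \beta_{\max}$, the target ratio factors as the telescoping product $Q = \prod_{i=0}^{m-1} Z(\beta_{i+1})/Z(\beta_i)$, each factor estimable from samples of $\mu_{\beta_i}$.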


Paper Structure

This paper contains 16 sections, 38 theorems, 76 equations, 1 table, and 5 algorithms.

Key Result

Theorem 1

There is a non-adaptive sampling algorithm that estimates $Q$ to within relative error $\varepsilon$ with probability at least $0.7$, using $O\!\left(\frac{q \log^2 h}{\varepsilon^2}\right)$ samples. $\blacktriangleleft$
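The defining feature of a non-adaptive algorithm is that the entire query schedule is fixed before any sample is drawn, so all sample batches can be produced in a single parallel round. A minimal toy sketch of this structure, assuming a small enumerable Gibbs model with $H(x)=x$ so the sampling oracle and $Z(\beta)$ can be computed exactly (the uniform schedule and sample counts here are illustrative, not the paper's choices):

```python
import math
import random

random.seed(0)

# Toy Gibbs model: Omega = {0, ..., n}, Hamiltonian H(x) = x,
# Z(beta) = sum_x exp(beta * H(x)), mu_beta(x) proportional to exp(beta * H(x)).
n = 20
H = lambda x: x

def Z(beta):
    # Exact partition function, used only to check the estimate.
    return sum(math.exp(beta * H(x)) for x in range(n + 1))

def sample(beta, k):
    # Exact sampling oracle for mu_beta on this enumerable toy model.
    xs = list(range(n + 1))
    w = [math.exp(beta * H(x)) for x in xs]
    return random.choices(xs, weights=w, k=k)

beta_min, beta_max = 0.0, 1.0
m = 40  # number of annealing steps (illustrative)

# Non-adaptive: the full query schedule is fixed up front, so every batch
# below could be drawn simultaneously, in one round.
schedule = [beta_min + i * (beta_max - beta_min) / m for i in range(m + 1)]
batches = {b: sample(b, 500) for b in schedule[:-1]}

# Combine: Q = Z(beta_max)/Z(beta_min) as a telescoping product, using the
# identity E_{x ~ mu_b}[exp((b' - b) H(x))] = Z(b')/Z(b) for each step.
est = 1.0
for b, b_next in zip(schedule, schedule[1:]):
    xs = batches[b]
    est *= sum(math.exp((b_next - b) * H(x)) for x in xs) / len(xs)

truth = Z(beta_max) / Z(beta_min)
print(est, truth)
```

On this toy instance the product estimator lands close to the true ratio; the point of the sketch is only the control flow: `schedule` and `batches` are determined before any estimate is computed, in contrast to adaptive annealing, where each queried $\beta$ depends on earlier samples.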
