
Horseshoe Priors and MDP

Nick Polson, Vadim Sokolov, Daniel Zantedeschi

Abstract

Carvalho, Polson and Scott (2010) established two foundational theorems for the horseshoe prior: tight two-sided logarithmic bounds on the marginal density near the origin (Theorem~1.1), and a super-efficient rate of convergence of the Bayes predictive density to the true sampling density in sparse situations (Theorem~2). The ``Shrink Globally, Act Locally'' paper \citep{polson2010shrink} formalised necessary and sufficient conditions on the prior's behaviour at the origin for sparsity adaptation as $p \to \infty$. We show that these results are not merely descriptive properties of the horseshoe -- they are the finite-sample precursors to the asymptotic moderate deviation principle (MDP) of \citet{datta2026newlook}. The log-pole singularity $\pi_{\mathrm{HS}}(\theta) \asymp -\log|\theta|$ is precisely the origin-integrability boundary that selects the MDP threshold $t_{\mathrm{crit}} = \sqrt{\log(\pi n/2)}$; super-efficiency below the threshold and tail robustness above it together produce the ABOS Bayes risk $p_0 \log(p/p_0)/n$; and the Clarke--Barron information-theoretic asymptotics of Bayes methods provide the unifying framework in which all three results are faces of a single logarithmic budget principle.
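As a quick numerical sketch (not part of the paper), the MDP threshold $t_{\mathrm{crit}} = \sqrt{\log(\pi n/2)}$ stated in the abstract grows only with the square root of $\log n$, so it increases very slowly in the sample size:

```python
import math

def t_crit(n: int) -> float:
    """MDP threshold sqrt(log(pi * n / 2)) from the abstract."""
    return math.sqrt(math.log(math.pi * n / 2))

# The threshold roughly doubles as n goes from 100 to 10^6.
for n in (100, 10_000, 1_000_000):
    print(n, round(t_crit(n), 4))
```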


Paper Structure

This paper contains 34 sections, 14 theorems, 52 equations, and 5 tables.

Key Result

Theorem 2.1

Let $K = (2\pi^3)^{-1/2}$. The univariate horseshoe density satisfies, for all $\theta \neq 0$,
$$
\frac{K}{2}\,\log\!\left(1 + \frac{4}{\theta^2}\right) \;<\; \pi_{\mathrm{HS}}(\theta) \;<\; K\,\log\!\left(1 + \frac{2}{\theta^2}\right).
$$
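The two-sided bounds $\tfrac{K}{2}\log(1+4/\theta^2) < \pi_{\mathrm{HS}}(\theta) < K\log(1+2/\theta^2)$ of Carvalho, Polson and Scott (2010) can be checked numerically by integrating the half-Cauchy local scale out of the normal scale mixture $\theta \mid \lambda \sim N(0,\lambda^2)$, $\lambda \sim C^+(0,1)$. A minimal sketch, assuming SciPy is available for the quadrature:

```python
import math
from scipy.integrate import quad  # assumes SciPy is installed

K = (2 * math.pi**3) ** -0.5

def horseshoe_density(theta: float) -> float:
    """Marginal horseshoe density, computed by integrating the
    N(0, lambda^2) kernel against the half-Cauchy prior on lambda."""
    def integrand(lam):
        normal = math.exp(-theta**2 / (2 * lam**2)) / (lam * math.sqrt(2 * math.pi))
        half_cauchy = 2 / (math.pi * (1 + lam**2))
        return normal * half_cauchy
    val, _ = quad(integrand, 0, math.inf, limit=200)
    return val

# The density should sit strictly between the two logarithmic bounds.
for theta in (0.1, 0.5, 1.0, 2.0):
    lower = (K / 2) * math.log(1 + 4 / theta**2)
    upper = K * math.log(1 + 2 / theta**2)
    dens = horseshoe_density(theta)
    print(f"theta={theta}: {lower:.4f} < {dens:.4f} < {upper:.4f}")
```

Both bounds behave like $2K/\theta^2$ in the tail and like $-2K\log|\theta|$ near the origin, which is the log-pole singularity the abstract refers to.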

Theorems & Definitions (20)

  • Theorem 2.1: Carvalho, Polson, Scott 2010
  • Remark 2.1: The constant $K$
  • Theorem 2.2: Carvalho, Polson, Scott 2010---super-efficiency
  • proof
  • Theorem 2.3: Polson--Scott 2010, necessary condition
  • Theorem 2.4: Polson--Scott 2010, sufficient condition
  • Proposition 2.5: Polson--Scott 2010
  • Theorem 3.1: Datta, Polson, Sokolov, Zantedeschi 2026
  • Proposition 4.1: Origin integrability boundary
  • proof
  • ...and 10 more