
The sharp one-dimensional convex sub-Gaussian comparison constant

Damek Davis, Sam Power

Abstract

Let $X$ be an integrable real random variable with mean zero and two-sided sub-Gaussian tail $\mathbb{P}(|X|>t)\le 2e^{-t^{2}/2}$ for all $t\ge 0$. We determine the smallest constant $c_\star$ such that $X$ is dominated in convex order by $c_\star G$, where $G$ is standard normal. Equivalently, $c_\star^2$ is the sharp one-dimensional convex sub-Gaussian comparison constant appearing in the \emph{Optimization Constants in Mathematics} repository~\cite{optimization-constants-repo}. We show that $c_\star$ is given by an explicit system of one-dimensional equations and is attained by an extremal distribution that saturates the tail constraint. Numerically, $c_\star \approx 2.30952$ (so $c_\star^2 \approx 5.33386$). We also determine the analogous sharp constant under a two-sided sub-exponential tail bound, with convex domination by a scaled Laplace law. Finally, we record two higher-dimensional consequences: a sequential tensorization principle for multivariate convex domination, and a dimension-free Gaussian comparator for the cone generated by convex ridge functions (the linear convex order).


Paper Structure

This paper contains 6 sections, 19 theorems, and 80 equations.

Key Result

Theorem 1

The sharp constant defined in (eq:def_cstar) satisfies $c_\star = c_0$, where $c_0$ is defined by the system (eq:Adef_Bdef)--(eq:c0def). Consequently, the one-dimensional value of the constant $C_{48}$ in~\cite{optimization-constants-repo} is $C_{48}^{(1)} = c_0^2$.
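As a hedged numerical sanity check of the theorem's statement via the stop-loss characterization of convex order (Proposition 4: for mean-zero laws, $X \preceq_{\mathrm{cx}} cG$ iff $\mathbb{E}(X-t)_+ \le \mathbb{E}(cG-t)_+$ for all $t$), the sketch below compares stop-loss premia for one concrete sub-Gaussian example, a Rademacher variable, against $c_\star G$ with the reported value $c_\star \approx 2.30952$. The Rademacher example and the grid of thresholds are illustrative choices, not taken from the paper; the closed form $\mathbb{E}(cG-t)_+ = c\,\varphi(t/c) - t\,(1-\Phi(t/c))$ is the standard normal stop-loss formula.

```python
import math

C_STAR = 2.30952  # numerical value of c_star reported in the paper


def normal_stop_loss(c: float, t: float) -> float:
    """E(cG - t)_+ for G ~ N(0,1): c*phi(t/c) - t*(1 - Phi(t/c))."""
    u = t / c
    phi = math.exp(-u * u / 2) / math.sqrt(2 * math.pi)  # standard normal density
    q = math.erfc(u / math.sqrt(2)) / 2                  # upper tail 1 - Phi(u)
    return c * phi - t * q


def rademacher_stop_loss(t: float) -> float:
    """E(X - t)_+ for X uniform on {-1, +1} (an illustrative sub-Gaussian law:
    P(|X| > t) <= 2 exp(-t^2/2) holds since the bound exceeds 1 on [0, 1))."""
    return 0.5 * max(1 - t, 0.0) + 0.5 * max(-1 - t, 0.0)


# Check the stop-loss inequality on a grid of thresholds t >= 0.
for t in [x / 10 for x in range(0, 31)]:
    assert rademacher_stop_loss(t) <= normal_stop_loss(C_STAR, t)

print("stop-loss comparison holds on the grid; c_star^2 =", C_STAR**2)
```

Note that passing on one example and a finite grid is only a plausibility check, not a verification of sharpness; the theorem's extremal distribution saturates the tail constraint rather than being two-valued.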

Theorems & Definitions (42)

  • Theorem 1: Sharp one-dimensional convex sub-Gaussian comparison
  • Remark 2: Numerical value
  • Remark 3: Other notions of sub-Gaussianity
  • Proposition 4: Stop-loss characterization of convex domination
  • Proof
  • Lemma 5: Layer-cake for hinges
  • Proof
  • Lemma 6: Tangent line lower bound
  • Proof
  • Lemma 7: Monotone-ratio principle
  • ...and 32 more