List Estimation
Nikola Zlatanov, Amin Gohari, Farzad Shahrivari, Mikhail Rudakov
Abstract
Classical estimation outputs a single point estimate of an unknown $d$-dimensional vector from an observation. In this paper, we study \emph{$k$-list estimation}, in which a single observation is used to produce a list of $k$ candidate estimates, and performance is measured by the expected squared distance from the true vector to the closest candidate. We compare this centralized setting with a symmetric decentralized minimum mean-square error (MMSE) benchmark in which $k$ agents observe conditionally i.i.d.\ measurements and each agent outputs its own MMSE estimate. On the centralized side, we show that optimal $k$-list estimation is equivalent to fixed-rate $k$-point vector quantization of the posterior distribution and, under standard regularity conditions, admits an exact high-rate asymptotic expansion with explicit constants and decay rate $k^{-2/d}$. On the decentralized side, we derive lower bounds in terms of the small-ball behavior of the single-agent MMSE error; in particular, when the conditional error density is bounded near the origin, the benchmark distortion cannot decay faster than order $k^{-2/d}$. We further show that if the error density vanishes at the origin, then the decentralized benchmark provably cannot attain the $k^{-2/d}$ exponent, whereas the centralized estimator retains that scaling. Gaussian specializations yield explicit formulas, and numerical experiments corroborate the predicted asymptotic behavior. Overall, the results show that, in terms of scaling with $k$, one observation combined with $k$ carefully chosen candidates can be asymptotically as effective as -- and in some regimes strictly better than -- this MMSE-based decentralized benchmark with $k$ independent observations.
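For context, the centralized expansion referenced above takes the form of the classical Zador high-rate quantization asymptotic (a standard statement under a moment condition, with $Q_d$ the dimension-dependent quantization coefficient; the paper applies this to the posterior density and supplies its explicit constants): for a density $p$ on $\mathbb{R}^d$,
\[
\lim_{k\to\infty} k^{2/d}\,\min_{c_1,\dots,c_k \in \mathbb{R}^d} \mathbb{E}\Big[\min_{1\le j\le k}\|X-c_j\|^2\Big] \;=\; Q_d \left(\int_{\mathbb{R}^d} p(x)^{\frac{d}{d+2}}\,dx\right)^{\frac{d+2}{d}}, \qquad X \sim p.
\]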

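As a quick numerical check of the $k^{-2/d}$ rate, the following is a minimal simulation sketch (not the paper's experiments): it assumes a Gaussian posterior, which by shift and scale invariance reduces to a standard normal, and uses scikit-learn's KMeans as a proxy for the optimal $k$-point quantizer.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
d = 2                                       # dimension of the unknown vector
train = rng.standard_normal((50_000, d))    # samples from the (posterior) density
test = rng.standard_normal((50_000, d))     # fresh samples for evaluating distortion

for k in [4, 16, 64, 256]:
    # KMeans centers approximate the optimal k candidates for squared error
    km = KMeans(n_clusters=k, n_init=5, random_state=0).fit(train)
    # distortion: expected squared distance to the closest of the k candidates
    d_k = (km.transform(test).min(axis=1) ** 2).mean()
    print(f"k={k:4d}  D_k={d_k:.4f}  k^(2/d)*D_k={k**(2/d) * d_k:.3f}")

If the $k^{-2/d}$ prediction holds, the printed products $k^{2/d} D_k$ should level off as $k$ grows.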