
Bayesian Elicitation with LLMs: Model Size Helps, Extra "Reasoning" Doesn't Always

Luka Hobor, Mario Brcic, Mihael Kovac, Kristijan Poje

Abstract

Large language models (LLMs) have been proposed as alternatives to human experts for estimating unknown quantities with associated uncertainty, a process known as Bayesian elicitation. We test this by asking eleven LLMs to estimate population statistics, such as health prevalence rates, personality trait distributions, and labor market figures, and to express their uncertainty as 95% credible intervals. We vary each model's reasoning effort (low, medium, high) to test whether more "thinking" improves results. Three key results emerge. First, larger, more capable models produce more accurate estimates, but increasing reasoning effort provides no consistent benefit. Second, all models are severely overconfident: their 95% intervals contain the true value only 9–44% of the time, far below the nominal 95%. Third, a statistical recalibration technique, conformal prediction, corrects this overconfidence by expanding the intervals until they achieve the intended coverage. In a preliminary experiment, giving models web search access degraded predictions for already-accurate models while modestly improving them for weaker ones. Models performed well on commonly discussed topics but struggled with specialized health data. These results indicate that LLM uncertainty estimates require statistical correction before they can be used in decision-making.
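The recalibration step described in the abstract can be made concrete with a small sketch of split conformal prediction applied to elicited intervals. This is a minimal illustration, assuming a symmetric expansion of each interval by a single margin learned on a held-out calibration set; the paper's exact conformal variant is not reproduced here, and all function and variable names are illustrative.

```python
import numpy as np

def conformal_recalibrate(lo_cal, hi_cal, y_cal, lo_test, hi_test, alpha=0.05):
    """Split conformal recalibration of LLM-elicited intervals.

    lo_cal, hi_cal   : elicited 95% interval endpoints on a calibration set
    y_cal            : true values for the calibration set
    lo_test, hi_test : intervals to be recalibrated
    Returns expanded intervals with finite-sample coverage >= 1 - alpha.
    """
    lo_cal, hi_cal, y_cal = map(np.asarray, (lo_cal, hi_cal, y_cal))

    # Nonconformity score: how far the true value falls outside the interval
    # (negative when it lies inside, so intervals can also shrink).
    scores = np.maximum(lo_cal - y_cal, y_cal - hi_cal)

    # Finite-sample-corrected (1 - alpha) quantile of the scores.
    n = len(scores)
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, q_level, method="higher")

    # Widen (or narrow) every test interval by the same margin q.
    return np.asarray(lo_test) - q, np.asarray(hi_test) + q
```

Because the margin q is a quantile of calibration errors, the guarantee is distribution-free: if the models are overconfident, q is large and the intervals widen until roughly 95% of held-out truths fall inside.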


Paper Structure

This paper contains 23 sections, 1 equation, 3 figures, 2 tables.

Figures (3)

  • Figure 1: Negative log-likelihood and relative sharpness by model and reasoning effort
  • Figure 2: Original and CP-calibrated coverage by model and reasoning effort. Raw coverage (left bars) shows severe under-coverage across all models. Conformal prediction (right bars) recovers near-nominal coverage. Groups marked with * have fewer than 15 calibration points.
  • Figure 3: Aggregated NLL, coverage, and relative sharpness by reasoning effort level. NLL and coverage do not vary significantly with effort; only sharpness increases.
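For reference, the three metrics appearing in these figures can be computed as in the sketch below. This assumes a Gaussian predictive distribution is implied by each elicited point estimate and 95% interval; the paper's exact NLL and sharpness definitions may differ, and all names here are illustrative.

```python
import numpy as np
from scipy import stats

def interval_metrics(point, lo, hi, y_true, alpha=0.05):
    """Coverage, NLL, and relative sharpness for elicited 95% intervals.

    Assumes a normal predictive distribution whose mean is the point
    estimate and whose standard deviation is implied by the interval width.
    """
    point, lo, hi, y_true = map(np.asarray, (point, lo, hi, y_true))

    # Empirical coverage: fraction of true values inside the intervals.
    coverage = np.mean((y_true >= lo) & (y_true <= hi))

    # Implied sigma: a 95% normal interval spans 2 * 1.96 standard deviations.
    z = stats.norm.ppf(1 - alpha / 2)
    sigma = (hi - lo) / (2 * z)

    # Negative log-likelihood of the truth under the implied Gaussian.
    nll = -np.mean(stats.norm.logpdf(y_true, loc=point, scale=sigma))

    # Relative sharpness: interval width relative to the true value.
    rel_sharpness = np.mean((hi - lo) / np.abs(y_true))

    return coverage, nll, rel_sharpness
```

Under these definitions, narrower intervals score better on sharpness but are penalized by NLL and coverage when they exclude the truth, which is the trade-off the figures summarize.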