Wired for Overconfidence: A Mechanistic Perspective on Inflated Verbalized Confidence in LLMs

Tianyi Zhao, Yinhan He, Wendy Zheng, Yujie Zhang, Chen Chen

Abstract

Large language models are often not just wrong, but \emph{confidently wrong}: when they produce factually incorrect answers, they tend to verbalize overly high confidence rather than signal uncertainty. Such verbalized overconfidence can mislead users and undermine confidence scores as a reliable uncertainty signal, yet its internal mechanisms remain poorly understood. We present a circuit-level mechanistic analysis of this inflated verbalized confidence in LLMs, organized around three axes: capturing verbalized confidence as a differentiable internal signal, identifying the circuits that causally inflate it, and leveraging these insights for targeted inference-time recalibration. Across two instruction-tuned LLMs on three datasets, we find that a compact set of MLP blocks and attention heads, concentrated in middle-to-late layers, consistently writes the confidence-inflation signal at the final token position. We further show that targeted inference-time interventions on these circuits substantially improve calibration. Together, our results suggest that verbalized overconfidence in LLMs is driven by identifiable internal circuits and can be mitigated through targeted intervention.
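
To make the elicitation protocol concrete, the sketch below shows one way to implement the two-step scheme described above (and in Figure 1, left) with Hugging Face Transformers: the model is first asked a factual question and then asked to self-report confidence as an integer from 0 to 99. The prompt wording, model identifier, and answer parsing are illustrative assumptions rather than the paper's exact setup.

```python
# Minimal sketch of two-step verbalized-confidence elicitation (cf. Figure 1, left).
# Prompt wording, model name, and parsing are illustrative assumptions, not the
# paper's exact protocol.
import re
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Qwen/Qwen2.5-3B-Instruct"  # one of the instruction-tuned models studied
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.bfloat16, device_map="auto"
)

def chat(messages, max_new_tokens=32):
    """Greedy-decode a reply to a chat-formatted conversation."""
    ids = tok.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(ids, max_new_tokens=max_new_tokens, do_sample=False)
    return tok.decode(out[0, ids.shape[1]:], skip_special_tokens=True).strip()

def elicit(question):
    # Step 1: ask for a short factual answer.
    msgs = [{"role": "user", "content": f"Answer in a few words: {question}"}]
    answer = chat(msgs)
    # Step 2: ask the model to self-report confidence as an integer in 0-99.
    msgs += [
        {"role": "assistant", "content": answer},
        {"role": "user", "content": "How confident are you in that answer? "
                                    "Reply with a single integer from 0 to 99."},
    ]
    reply = chat(msgs, max_new_tokens=8)
    m = re.search(r"\d{1,2}", reply)
    return answer, (int(m.group()) if m else None)

print(elicit("What is the capital city of Australia?"))
```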

Paper Structure

This paper contains 16 sections, 12 equations, 9 figures, and 6 tables.

Figures (9)

  • Figure 1: Left: Two-step elicitation. The model answers a factual question, then self-reports confidence as an integer (0–99). Center: Truth-injection counterfactual design. For each confidently-wrong record, the clean prompt retains the model's incorrect answer, while the corrupted prompt replaces it with the ground truth, keeping all other tokens identical. Right: $\Delta$TSLD distributions and $\Delta$TSLD vs. $\Delta$confidence scatter plots for Qwen2.5-3B (top) and Llama-3.2-3B (bottom) on PopQA.
  • Figure 2: Attribution heatmaps across all six model$\times$dataset configurations.
  • Figure 3: (a) Faithfulness as a function of retained top-$k$ edges. (b) TSLD reduction when each of the top-10 components is individually ablated.
  • Figure 4: Incremental component ablation.
  • Figure 5: Reliability curves for Llama-3.2-3B-Instruct across all three datasets under baseline (before intervention), mean ablation, and steering at $\alpha \in \{0.2, 0.3, 0.4, 0.5, 0.6, 0.7\}$ (an illustrative steering sketch follows this figure list).
  • ...and 4 more figures
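
As a rough illustration of the inference-time interventions compared in Figure 5, the PyTorch sketch below attenuates, by a factor $\alpha$, the component of selected MLP outputs along a presumed confidence-inflation direction at the final token position. The layer indices, the direction vector `v_conf`, and the exact form of the intervention are hypothetical placeholders; the paper identifies its targets through the attribution analysis, which this sketch does not reproduce.

```python
# Hedged sketch of an inference-time steering intervention (cf. Figure 5): forward
# hooks that attenuate, by a factor alpha, the component of selected MLP outputs
# along a presumed confidence-inflation direction at the final token position.
# Layer indices, the direction vector, and the form of the intervention are
# hypothetical; the paper selects its targets via attribution analysis.
import torch

def add_steering_hooks(model, layer_ids, direction, alpha=0.4):
    """Attach hooks to the MLP blocks of `layer_ids` (Llama/Qwen-style module
    layout assumed: model.model.layers[i].mlp). Returns the hook handles;
    call .remove() on each to undo the intervention."""
    handles = []

    def hook(module, inputs, output):
        out = output[0] if isinstance(output, tuple) else output
        d = (direction / direction.norm()).to(out)   # unit direction, matched dtype/device
        steered = out.clone()
        last = steered[:, -1, :]                     # final token position only
        proj = (last @ d).unsqueeze(-1) * d          # component along the direction
        steered[:, -1, :] = last - alpha * proj      # remove a fraction alpha of it
        return (steered, *output[1:]) if isinstance(output, tuple) else steered

    for i in layer_ids:
        handles.append(model.model.layers[i].mlp.register_forward_hook(hook))
    return handles

# Hypothetical usage: layers and direction would come from the attribution step.
# handles = add_steering_hooks(model, layer_ids=[20, 24, 27], direction=v_conf, alpha=0.4)
```

A mean-ablation variant, also shown in Figure 5, would instead replace each targeted component's output with its mean activation over a reference set rather than scaling a direction.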