Failing to Falsify: Evaluating and Mitigating Confirmation Bias in Language Models

Ayush Rajesh Jhaveri, Anthony GX-Chen, Ilia Sucholutsky, Eunsol Choi

Abstract

Confirmation bias, the tendency to seek evidence that supports rather than challenges one's beliefs, hinders reasoning. We examine whether large language models (LLMs) exhibit confirmation bias by adapting the rule-discovery study from human psychology: given a sequence of three numbers (a "triple"), an agent engages in an interactive feedback loop where it (1) proposes a new triple, (2) receives feedback on whether it satisfies the hidden rule, and (3) guesses the rule. Across eleven LLMs of multiple families and scales, we find that LLMs exhibit confirmation bias, often proposing triples that confirm their current hypothesis rather than trying to falsify it. This leads to slower and less frequent discovery of the hidden rule. We further explore intervention strategies developed for humans (e.g., encouraging the agent to consider counterexamples). We find that prompting LLMs with such instructions consistently decreases confirmation bias, improving rule-discovery rates from 42% to 56% on average. Lastly, we mitigate confirmation bias by distilling intervention-induced behavior into LLMs, showing promising generalization to a new task, the Blicket test. Our work shows that confirmation bias limits LLMs in hypothesis exploration, and that it can be mitigated by injecting interventions designed for humans.
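
To make the task concrete, the following is a minimal Python sketch of this feedback loop. The hidden rule, the agent callables (propose_triple, guess_rule), the ten-turn budget, and the probe-set scoring are all illustrative assumptions, not the paper's implementation.

```python
from typing import Callable, List, Tuple

Triple = Tuple[int, int, int]
History = List[Tuple[Triple, bool]]  # (proposed triple, satisfied the rule?)

def rule_discovery_episode(
    hidden_rule: Callable[[Triple], bool],
    propose_triple: Callable[[History], Triple],                # agent move, e.g. an LLM call
    guess_rule: Callable[[History], Callable[[Triple], bool]],  # agent's final hypothesis
    seed: Triple = (2, 4, 6),
    max_turns: int = 10,
) -> bool:
    """Run one episode of the interactive rule-discovery loop."""
    # The agent starts from an initial triple known to satisfy the rule.
    history: History = [(seed, True)]
    for _ in range(max_turns):
        # (1) The agent proposes a new triple to test.
        triple = propose_triple(history)
        # (2) Binary feedback: does the triple satisfy the hidden rule?
        history.append((triple, hidden_rule(triple)))
    # (3) The agent states its hypothesis; we score it by agreement with
    # the hidden rule on a small probe set (an illustrative success
    # criterion, not necessarily the paper's).
    hypothesis = guess_rule(history)
    probes = [(1, 2, 3), (3, 2, 1), (2, 4, 8), (5, 5, 5), (1, 10, 100)]
    return all(hypothesis(t) == hidden_rule(t) for t in probes)

# Example hidden rule from the classic 2-4-6 study: any strictly increasing triple.
strictly_increasing = lambda t: t[0] < t[1] < t[2]
```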

Figures (3)

  • Figure 1: Confirmation bias leads to narrow exploration. We show two trajectories for the rule-discovery task, in which an agent aims to infer a hidden numerical rule over multiple turns. Starting from an initial triple, the agent guesses a hypothesis and tests a new triple, receiving binary feedback on whether the proposed triple satisfies the hidden rule. A compatible test is consistent with the agent's current hypothesis, whereas an incompatible test contradicts it. Trajectory 2 proposes compatible tests in both turns, showing confirmation bias. In contrast, Trajectory 1 introduces an incompatible test in the second turn, eliminating the incorrect hypothesis and leading to discovery of the hidden rule.
  • Figure 2: Confirmation bias correlates with task success. A higher I:C ratio (more disconfirmatory testing) is associated with higher task success; see the sketch of this metric after this list. Each point represents a (model, variant) pair averaged over 80 episodes, and shaded regions show 95% confidence intervals.
  • Figure 3: Interventions shift exploration in thinking but not non-thinking models. Red denotes the Baseline; blue denotes intervention runs (Dual-Goal / Think-in-Opposites). Ellipses show 95% covariance regions. Each point represents a (model, variant) pair averaged over 80 episodes.
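
The I:C measure used in Figures 2 and 3 can be read as the ratio of incompatible to compatible tests in a trajectory. Below is a minimal sketch of that bookkeeping, assuming the agent's current hypothesis is available as a predicate over triples at each turn; the exact definition and any smoothing in the paper may differ.

```python
from typing import Callable, List, Tuple

Triple = Tuple[int, int, int]
# One entry per turn: the agent's hypothesis at that turn, paired with
# the triple it chose to test next.
Turn = Tuple[Callable[[Triple], bool], Triple]

def incompatible_to_compatible_ratio(turns: List[Turn]) -> float:
    """I:C ratio over one trajectory.

    A test is *compatible* if the current hypothesis predicts the
    proposed triple satisfies the rule, and *incompatible*
    (disconfirmatory) otherwise.
    """
    compatible = sum(1 for hypothesis, triple in turns if hypothesis(triple))
    incompatible = len(turns) - compatible
    # Guard the no-compatible-tests edge case; in practice the counts
    # might be smoothed instead (an assumption, not the paper's choice).
    return incompatible / compatible if compatible else float("inf")
```

Under this reading, a purely confirmatory agent scores 0, and higher values indicate more falsification-seeking behavior.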