CogBias: Measuring and Mitigating Cognitive Bias in Large Language Models

Fan Huang, Songheng Zhang, Haewoon Kwak, Jisun An

Abstract

Large Language Models (LLMs) are increasingly deployed in high-stakes decision-making contexts. While prior work has shown that LLMs exhibit cognitive biases behaviorally, whether these biases correspond to identifiable internal representations and can be mitigated through targeted intervention remains an open question. We define LLM cognitive bias as systematic, reproducible deviations from correct answers in tasks with computable ground-truth baselines, and introduce LLM CogBias, a benchmark organized around four families of cognitive biases: Judgment, Information Processing, Social, and Response. We evaluate three LLMs and find that cognitive biases emerge systematically across all four families, with magnitudes and debiasing responses that are strongly family-dependent: prompt-level debiasing substantially reduces Response biases but backfires for Judgment biases. Using linear probes under a contrastive design, we show that these biases are encoded as linearly separable directions in model activation space. Finally, we apply activation steering to modulate biased behavior, achieving 26--32\% reduction in bias score (fraction of biased responses) while preserving downstream capability on 25 benchmarks (Llama: negligible degradation; Qwen: up to $-$19.0pp for Judgment biases). Despite near-orthogonal bias representations across models (mean cosine similarity 0.01), steering reduces bias at similar rates across architectures ($r(246)$=.621, $p$<.001), suggesting shared functional organization.
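The steering intervention summarized above can be sketched as projection removal along an estimated bias direction in activation space. The difference-of-means direction estimate and the steering rule below are illustrative assumptions on synthetic activations, not the paper's exact implementation:

```python
import numpy as np

def bias_direction(biased_acts, unbiased_acts):
    """Estimate a unit bias direction as the difference of mean activations
    between contrastive (biased vs. unbiased) prompt sets."""
    v = biased_acts.mean(axis=0) - unbiased_acts.mean(axis=0)
    return v / np.linalg.norm(v)

def steer(hidden, v, alpha=1.0):
    """Shift a hidden state against the bias direction; alpha=1 removes
    the full projection onto v."""
    return hidden - alpha * np.dot(hidden, v) * v

# Synthetic stand-ins for layer activations (not real model states).
rng = np.random.default_rng(0)
v_true = np.array([1.0, 0.0, 0.0, 0.0])
biased = rng.normal(size=(64, 4)) + 2.0 * v_true
unbiased = rng.normal(size=(64, 4))

v = bias_direction(biased, unbiased)
h_steered = steer(biased[0], v, alpha=1.0)
# After steering, the component along the estimated direction is (near) zero.
print(float(np.dot(h_steered, v)))
```

With `alpha=1.0` the projection is removed exactly; intermediate `alpha` values would trace out intervention response curves like those in Figure 2.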

Paper Structure

This paper contains 155 sections, 4 equations, 12 figures, 25 tables.

Figures (12)

  • Figure 1: Layer-wise probe accuracy under contrastive design across four bias families. Judgment, Information Processing, and Response families achieve near-perfect accuracy across most layers, while Social biases show lower and more variable accuracy with divergent patterns between models.
  • Figure 2: Intervention response curves across four bias families. Despite near-orthogonal steering directions (mean cosine similarity 0.01), bias reduction follows similar trajectories across models ($r(246)$=.621, $p$<.001).
  • Figure 3: Overview of LLM CogBias. We follow a Behavior--Representation--Intervention progression: (RQ1) profiling cognitive biases across four families, (RQ2) probing whether biases are encoded as linearly separable directions, and (RQ3) applying activation steering to mitigate biased behavior.
  • Figure 4: Layer-layer similarity heatmaps showing cosine similarity between bias directions $\mathbf{v}_\ell$ across layers for D1 (Judgment biases). Non-contrastive probing (panels 1, 3) shows high off-diagonal similarity (mean 0.18/0.14 for Llama/Qwen), producing block structure characteristic of surface-pattern detection. Contrastive probing (panels 2, 4) shows off-diagonal similarity dropping below 0.3 for distant layers, with high similarity ($>$0.7) confined to adjacent middle layers (L35--L45), providing evidence of genuine layer-specific bias encoding.
  • Figure 5: Permutation test null distributions for all four bias families. The observed probe accuracy (100%) falls far outside the null distributions from label-shuffled data (mean 50%) across all datasets, with z-scores $>$7 confirming that probes detect genuine signal rather than chance patterns.
  • ...and 7 more figures
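The permutation test behind Figure 5 can be illustrated with a toy probe: shuffle the labels, re-fit, and compare the observed accuracy against the resulting null distribution. The nearest-class-mean probe and synthetic data below are simplified stand-ins for the paper's probes and activations:

```python
import numpy as np

def probe_accuracy(X, y):
    """Minimal linear probe: nearest-class-mean classifier,
    trained on the first half and evaluated on the second."""
    n = len(y) // 2
    Xtr, ytr, Xte, yte = X[:n], y[:n], X[n:], y[n:]
    mu0 = Xtr[ytr == 0].mean(axis=0)
    mu1 = Xtr[ytr == 1].mean(axis=0)
    pred = (np.linalg.norm(Xte - mu1, axis=1)
            < np.linalg.norm(Xte - mu0, axis=1)).astype(int)
    return float((pred == yte).mean())

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))
y = np.tile([0, 1], 100)
X[y == 1] += 3.0  # separable signal, standing in for biased-vs-unbiased states

observed = probe_accuracy(X, y)
# Null distribution: probe accuracy under label shuffling.
null = np.array([probe_accuracy(X, rng.permutation(y)) for _ in range(200)])
z = (observed - null.mean()) / null.std()
print(observed, z)
```

On genuinely separable data the observed accuracy sits far outside the label-shuffled null (large z-score), mirroring the z > 7 reported in Figure 5; on chance patterns it would fall inside the null.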