From Hallucination to Scheming: A Unified Taxonomy and Benchmark Analysis for LLM Deception

Jerick Shi, Terry Jingcheng Zhang, Zhijing Jin, Vincent Conitzer

Abstract

Large language models (LLMs) produce systematically misleading outputs, from hallucinated citations to strategic deception of evaluators, yet these phenomena are studied by separate communities with incompatible terminology. We propose a unified taxonomy organized along three complementary dimensions: degree of goal-directedness (behavioral to strategic deception), object of deception, and mechanism (fabrication, omission, or pragmatic distortion). Applying this taxonomy to 50 existing benchmarks reveals that every benchmark tests fabrication while pragmatic distortion, attribution, and capability self-knowledge remain critically under-covered, and strategic deception benchmarks are nascent. We offer concrete recommendations for developers and regulators, including a minimal reporting template for positioning future work within our framework.

Paper Structure

This paper contains 66 sections, 2 figures, and 8 tables.

Figures (2)

  • Figure 1: Deceptive LLM outputs organized along three dimensions: behavioral versus strategic origin, object of deception, and mechanism. Current benchmarks concentrate in the fabrication column; omission, pragmatic distortion, and most strategic deception cells remain under-covered (see the benchmark analysis section).
  • Figure 2: Benchmark coverage across taxonomy dimensions ($N=50$). Percentages exceed 100% where benchmarks span multiple categories.