Cheap Talk, Empty Promise: Frontier LLMs easily break public promises for self-interest

Jerick Shi, Terry Jingcheng Zhang, Zhijing Jin, Vincent Conitzer

Abstract

Large language models are increasingly deployed as autonomous agents in multi-agent settings where they communicate intentions and take consequential actions with limited human oversight. A critical safety question is whether agents that publicly commit to actions break those promises when they can privately deviate, and what the consequences are for both themselves and the collective. We study deception as a deviation from a publicly announced action in one-shot normal-form games, classifying each deviation by its effect on individual payoff and collective welfare into four categories: win-win, selfish, altruistic, and sabotaging. By exhaustively enumerating announcement profiles across six canonical games, nine frontier models, and varying group sizes, we identify all opportunities for each deviation type and measure how often agents exploit them. Across all settings, agents deviate from promises in approximately 56.6% of scenarios, but the character of deception varies substantially across models even at similar overall rates. Most critically, for the majority of models, promise-breaking occurs without verbalized awareness that a promise is being broken.
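The four-way classification above partitions deviations by two signs: whether the deviator's own payoff improves, and whether collective welfare improves. The following minimal sketch illustrates that quadrant logic; the function name, inputs, and the tie-breaking choice (treating a non-strict change as "no gain") are illustrative assumptions, not the paper's actual implementation.

```python
from enum import Enum


class DeviationType(Enum):
    WIN_WIN = "win-win"        # deviator gains, collective welfare gains
    SELFISH = "selfish"        # deviator gains, collective welfare loses
    ALTRUISTIC = "altruistic"  # deviator loses, collective welfare gains
    SABOTAGING = "sabotaging"  # deviator loses, collective welfare loses


def classify_deviation(payoff_promised: float, payoff_deviated: float,
                       welfare_promised: float, welfare_deviated: float) -> DeviationType:
    """Classify a deviation from a publicly announced action by its effect
    on the deviator's payoff and on collective welfare (e.g., total payoff).

    Assumption: a strictly higher value counts as a gain; ties count as no gain.
    """
    gains_self = payoff_deviated > payoff_promised
    gains_group = welfare_deviated > welfare_promised
    if gains_self and gains_group:
        return DeviationType.WIN_WIN
    if gains_self:
        return DeviationType.SELFISH
    if gains_group:
        return DeviationType.ALTRUISTIC
    return DeviationType.SABOTAGING
```

For example, an agent that promises to cooperate but privately defects, raising its own payoff from 3 to 5 while total welfare falls from 10 to 8, would be classified as selfish under this sketch.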

Paper Structure

This paper contains 31 sections, 1 equation, 5 figures, 29 tables.

Figures (5)

  • Figure 1: Evaluation framework: Scenario generation selects games and algorithmically enumerates promise-breaking opportunities. Behavioral evaluation queries nine frontier LLMs, classifies deviations, and scores reasoning traces for deception awareness.
  • Figure 2: Opportunity-based exploitation rates by behavioral quadrant, averaged across games and group sizes. Each rate is conditioned on the relevant opportunity type existing.
  • Figure 3: Missed opportunity rates by model, averaged across group sizes. Missed opportunities are concentrated primarily in the Weakest Link Game, with moderate contributions from the Tragedy of the Commons and El Farol.
  • Figure 4: Model characterization in the profitability--prosociality space. Each point represents a model, with the $x$-coordinate measuring the fraction of lies that are individually profitable and the $y$-coordinate measuring the fraction that are prosocial. Most models fall in the win-win quadrant (high $x$, high $y$).
  • Figure 5: Deception awareness score distribution across reasoning traces when promises are broken, averaged across group sizes. Score 1 indicates no awareness of deception; Score 5 indicates full strategic awareness. Models are ordered by increasing Score 1 proportion.