Stable Reasoning, Unstable Responses: Mitigating LLM Deception via Stability Asymmetry

Guoxi Zhang, Jiawei Chen, Tianzhuo Yang, Lang Qin, Juntao Dai, Yaodong Yang, Jingwei Yi

Abstract

As Large Language Models (LLMs) expand in capability and application scope, their trustworthiness becomes critical. A key risk is intrinsic deception, wherein a model strategically misleads users to achieve its own objectives. Existing alignment approaches based on chain-of-thought (CoT) monitoring supervise explicit reasoning traces. Under optimization pressure, however, models are incentivized to conceal deceptive reasoning, rendering semantic supervision fundamentally unreliable. Grounded in cognitive psychology, we hypothesize that a deceptive LLM maintains a stable internal belief in its CoT while its external response remains fragile under perturbation. We term this phenomenon stability asymmetry and quantify it as the contrast between internal CoT stability and external response stability under perturbation. Building on this structural signature, we propose Stability Asymmetry Regularization (SAR), a novel alignment objective that penalizes this distributional asymmetry during reinforcement learning. Unlike CoT monitoring, SAR targets the statistical structure of model outputs, making it robust to semantic concealment. Extensive experiments confirm that stability asymmetry reliably identifies deceptive behavior and that SAR effectively suppresses intrinsic deception without degrading general model capability.
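
The abstract describes quantifying stability asymmetry as the gap between internal CoT stability and external response stability under perturbation. The following is a minimal sketch of one way such a score could be computed, not the authors' implementation: it assumes stability is measured as mean pairwise cosine similarity over K perturbed generations, and that the asymmetry score is simply the internal-minus-external gap; the function names and the choice of similarity metric are illustrative assumptions.

```python
# Hypothetical sketch: stability asymmetry from K perturbed generations.
# Assumption: stability = mean pairwise cosine similarity of embeddings.
import numpy as np

def pairwise_stability(embeddings: np.ndarray) -> float:
    """Mean pairwise cosine similarity over K embedded generations, shape (K, d)."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T                       # (K, K) cosine similarities
    upper = sims[np.triu_indices(len(sims), k=1)]  # unique off-diagonal pairs
    return float(upper.mean())

def stability_asymmetry(cot_embs: np.ndarray, resp_embs: np.ndarray) -> float:
    """S_int - S_ext: positive values flag 'stable reasoning, unstable response'."""
    s_int = pairwise_stability(cot_embs)    # internal CoT stability
    s_ext = pairwise_stability(resp_embs)   # external response stability
    return s_int - s_ext
```

In use, one would embed the K perturbed CoTs and responses with any sentence encoder and flag samples whose asymmetry score exceeds a calibrated threshold; the paper's own stability metric (SE, compared against alternatives in Figure 3) may differ from the cosine-similarity proxy assumed here.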

Paper Structure

This paper contains 88 sections, 16 equations, 7 figures, 6 tables, and 2 algorithms.

Figures (7)

  • Figure 1: Conceptual illustration of Stability Asymmetry. A deceiver maintains a consistent internal belief while providing a conflicting external response, leading to cue leakage under perturbation.
  • Figure 2: The Stability Asymmetry Regularization Framework. (a) Conceptual landscape illustrating the SAR Cost mechanism, where deceptive behaviors (high belief stability but low behavior stability) receive heavy penalties that push the policy into the safe zone. (b) The overall pipeline, detailing how perturbations are used to compute external and internal stabilities ($S_{\text{ext}}$ and $S_{\text{int}}$), which are then fused via a soft gate to calculate the SAR Cost for Lagrangian-based policy updates (see the sketch after this list).
  • Figure 3: Comparison of four stability metrics across two deception scenarios and two base models. SE demonstrates the most stable and consistent separability for both CoT and Response.
  • Figure 4: Visualization of three behavioral modes in the two-dimensional stability space using SE. Deceptive samples (red) exhibit the characteristic upper-left positioning, confirming the predicted stability asymmetry.
  • Figure 5: A case study of intrinsic deception mitigation, showing that GRPO produces deceptive outputs, CoT Monitor obfuscates deceptive intent, and SAR successfully enforces honest reasoning.
  • ...and 2 more figures
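
The Figure 2 caption describes fusing $S_{\text{int}}$ and $S_{\text{ext}}$ through a soft gate into a SAR Cost that enters the policy objective via a Lagrangian. The sketch below illustrates one plausible shape for that mechanism under stated assumptions; the sigmoid gate, temperature, and objective form are guesses for illustration, not the paper's actual equations.

```python
# Hypothetical sketch of a soft-gated SAR cost and a Lagrangian objective.
# Gate form, temperature tau, and the update rule are assumptions.
import numpy as np

def sar_cost(s_int: float, s_ext: float, tau: float = 0.1) -> float:
    """Soft-gated penalty that grows when S_int is high but S_ext is low."""
    gate = 1.0 / (1.0 + np.exp(-(s_int - s_ext) / tau))   # sigmoid soft gate
    return float(gate * max(s_int - s_ext, 0.0))          # penalize the asymmetry

def lagrangian_objective(reward: float, cost: float, lam: float) -> float:
    """Constrained-RL style objective: maximize reward minus lambda * cost."""
    return reward - lam * cost
```

In a constrained-RL setup of this kind, the multiplier `lam` would itself be updated (for example by dual ascent) so that the expected SAR cost stays below a chosen budget, pushing the policy toward the "safe zone" depicted in Figure 2(a).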

Theorems & Definitions (2)

  • Definition 3.1: Stability Asymmetry in Human Deception
  • Definition 3.2: Stability Asymmetry in LLM Deception