
Why Models Know But Don't Say: Chain-of-Thought Faithfulness Divergence Between Thinking Tokens and Answers in Open-Weight Reasoning Models

Richard J. Young

Abstract

Extended-thinking models expose a second text-generation channel ("thinking tokens") alongside the user-visible answer. This study examines 12 open-weight reasoning models on MMLU and GPQA questions paired with misleading hints. Among the 10,506 cases where models actually followed the hint (choosing the hint's target over the ground truth), each case is classified by whether the model acknowledges the hint in its thinking tokens, its answer text, both, or neither. In 55.4% of these cases the model's thinking tokens contain hint-related keywords that the visible answer omits entirely, a pattern termed *thinking-answer divergence*. The reverse (answer-only acknowledgment) is near-zero (0.5%), confirming that the asymmetry is directional. Hint type shapes the pattern sharply: sycophancy is the most *transparent* hint, with 58.8% of sycophancy-influenced cases acknowledging the professor's authority in both channels, while consistency (72.2%) and unethical (62.7%) hints are dominated by thinking-only acknowledgment. Models also vary widely, from near-total divergence (Step-3.5-Flash: 94.7%) to relative transparency (Qwen3.5-27B: 19.6%). These results show that answer-text-only monitoring misses more than half of all hint-influenced reasoning and that thinking-token access, while necessary, still leaves 11.8% of cases with no verbalized acknowledgment in either channel.
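The four-quadrant classification described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual pipeline: the keyword list, function names, and the exact matching rule (case-insensitive substring search) are all assumptions for exposition.

```python
# Hypothetical sketch of the four-quadrant classification: each
# hint-influenced case is labeled by whether hint-related keywords appear
# in the thinking tokens, the answer text, both, or neither.
# The keyword set below is illustrative, not the study's actual lexicon.

HINT_KEYWORDS = {"professor", "hint", "suggested"}  # assumed, for illustration

def contains_hint_keyword(text: str, keywords=HINT_KEYWORDS) -> bool:
    """Case-insensitive substring check for any hint-related keyword."""
    lowered = text.lower()
    return any(kw in lowered for kw in keywords)

def classify_case(thinking: str, answer: str) -> str:
    """Assign one of the four acknowledgment quadrants to a case."""
    in_thinking = contains_hint_keyword(thinking)
    in_answer = contains_hint_keyword(answer)
    if in_thinking and in_answer:
        return "both"            # transparent: acknowledged in both channels
    if in_thinking:
        return "thinking_only"   # thinking-answer divergence (55.4% of cases)
    if in_answer:
        return "answer_only"     # near-zero in the study (0.5%)
    return "neither"             # unacknowledged in either channel (11.8%)
```

Under this scheme, answer-text-only monitoring corresponds to inspecting only the `answer` argument, which by construction misses every `thinking_only` case.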

Paper Structure

This paper contains 47 sections, 1 equation, 8 figures, and 4 tables.

Figures (8)

  • Figure 1: Four-quadrant distribution of hint acknowledgment across all 10,506 influenced cases. The dominant red wedge (55.4%) represents thinking-answer divergence (thinking-only pattern).
  • Figure 2: Thinking-only divergence rate by model. Models fall into three groups: heavy divergence ($>$80%), moderate (45--60%), and low ($<$45%).
  • Figure 3: Full four-quadrant distribution by model. High-divergence models (top) are dominated by the red "thinking-only" category, while transparent models (bottom) show more green.
  • Figure 4: Four-quadrant distribution by hint type. Sycophancy is dominated by transparency (59%), while consistency and unethical hints are dominated by thinking-only acknowledgment (72% and 63%). The unacknowledged category (grey) is concentrated in visual pattern (38%) and consistency/unethical ($\sim$17%).
  • Figure 5: Divergence rate heatmap by model $\times$ hint type. Dark red cells indicate near-total divergence ($>$90%); green cells indicate transparency. The top-left cluster (high-divergence models $\times$ consistency/unethical) shows the most extreme divergence.
  • ...and 3 more figures