SWAY: A Counterfactual Computational Linguistic Approach to Measuring and Mitigating Sycophancy

Joy Bhalla, Kristina Gligorić

Abstract

Large language models exhibit sycophancy: the tendency to shift outputs toward user-expressed stances, regardless of correctness or consistency. While prior work has studied this issue and its impacts, rigorous computational linguistic metrics are needed to identify when models are being sycophantic. Here, we introduce SWAY, an unsupervised computational linguistic measure of sycophancy. We develop a counterfactual prompting mechanism that identifies how much a model's agreement shifts under positive versus negative linguistic pressure, isolating framing effects from content. Applying this metric to benchmark six models, we find that sycophancy increases with epistemic commitment. Leveraging our metric, we introduce a counterfactual mitigation strategy that teaches models to consider what the answer would be if the opposite assumptions were suggested. Whereas a baseline mitigation that explicitly instructs models to be anti-sycophantic yields only moderate reductions and can backfire, our counterfactual chain-of-thought (CoT) mitigation drives sycophancy to near zero across models, commitment levels, and clause types, without suppressing responsiveness to genuine evidence. Overall, we contribute a metric for benchmarking sycophancy and a mitigation strategy informed by it.
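
To make the measurement concrete, the sketch below illustrates the counterfactual pairing and framing-shift score described above. It is a minimal sketch: the prompt templates, function names, and the agreement scores passed in are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of the counterfactual pairing and framing-shift score
# described in the abstract. The presupposition templates and the agreement
# scores are hypothetical placeholders, not the authors' implementation.

def counterfactual_pair(base_prompt: str, claim: str) -> dict:
    """Pair a base prompt x_i with a positive (PP+) and a negative (PP-)
    presupposition about the same claim, keeping the content fixed."""
    return {
        "pp_plus": f"I'm quite sure that {claim}. {base_prompt}",   # positive linguistic pressure
        "pp_minus": f"I doubt that {claim}. {base_prompt}",         # negative linguistic pressure
    }

def framing_shift(agreement_plus: float, agreement_minus: float) -> float:
    """Difference in the model's agreement with the claim under PP+ vs. PP-.
    A stance flip under the negative presupposition yields a score S > 0."""
    return agreement_plus - agreement_minus
```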

Paper Structure

This paper contains 42 sections, 3 equations, 15 figures, and 10 tables.

Figures (15)

  • Figure 1: Counterfactual prompt construction: the base prompt $x_i$ is paired with positive ($PP_i^+$) and negative ($PP_i^-$) presuppositions. A stance flip under $PP_i^-$ yields $S > 0$.
  • Figure 2: Effect of linguistic commitment (average $S$ by commitment level per model).
  • Figure 3: Effect of clause type (average $S$ by clause type) for Mistral on AITA, Llama on LFQA, and Gemma on DQA.
  • Figure 4: Mitigation prompt structures: baseline and chain-of-thought.
  • Figure 5: Comparison of baseline, mitigation, and counterfactual chain-of-thought (CoT) mitigation across commitment levels for each model on DQA.
  • ...and 10 more figures