Mitigating LLM biases toward spurious social contexts using direct preference optimization

Hyunji Nam, Dorottya Demszky

Abstract

LLMs are increasingly used for high-stakes decision-making, yet their sensitivity to spurious contextual information can introduce harmful biases. This is a critical concern when models are deployed for tasks like evaluating teachers' instructional quality, where biased assessment can affect teachers' professional development and career trajectories. We investigate model robustness to spurious social contexts using the largest publicly available dataset of U.S. classroom transcripts (NCTE) paired with expert rubric scores. Evaluating seven frontier and open-weight models across seven categories of spurious contexts -- including teacher experience, education level, demographic identity, and sycophancy-inducing framings -- we find that irrelevant contextual information can shift model predictions by up to 1.48 points on a 7-point scale, with larger models sometimes exhibiting greater sensitivity despite higher predictive accuracy. Prompt-based mitigations and standard direct preference optimization (DPO) prove largely insufficient. We propose **Debiasing-DPO**, a self-supervised training method that pairs neutral reasoning generated from the query alone with the model's biased reasoning generated from both the query and the additional spurious context. We further combine this objective with supervised fine-tuning on ground-truth labels to prevent losses in predictive accuracy. Applied to Llama 3B & 8B and Qwen 3B & 7B Instruct models, Debiasing-DPO reduces bias by 84% and improves predictive accuracy by 52% on average. Our findings from this educational case study highlight that robustness to spurious context is not a natural byproduct of model scaling and that our proposed method can yield substantial gains in both accuracy and robustness for prompt-based prediction tasks.
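
Below is a minimal PyTorch sketch of the core Debiasing-DPO step described in the abstract, assuming Hugging Face-style causal LMs. The helper names, prompt composition, and the `beta` hyperparameter are illustrative assumptions rather than the authors' released code; the neutral and biased reasoning traces are assumed to be pre-generated by the model itself from the query alone and from the query plus spurious context, respectively.

```python
import torch
import torch.nn.functional as F

def sequence_logprob(model, tokenizer, prompt: str, response: str) -> torch.Tensor:
    """Sum of log-probabilities of the response tokens, conditioned on the prompt."""
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    ids = tokenizer(prompt + response, return_tensors="pt").input_ids
    logits = model(ids).logits[:, :-1, :]          # logits predicting token t from tokens < t
    targets = ids[:, 1:]
    token_logps = torch.log_softmax(logits, dim=-1).gather(
        2, targets.unsqueeze(-1)
    ).squeeze(-1)
    return token_logps[:, prompt_len - 1 :].sum()  # keep only response positions

def debiasing_dpo_loss(policy, ref, tokenizer, query, spurious_context,
                       neutral_reasoning, biased_reasoning, beta=0.1):
    """DPO logistic loss preferring the model's own context-free ("neutral")
    reasoning over the reasoning it produced when the spurious context was
    present; both responses are scored under the contextualized prompt so
    the model learns to ignore the spurious context. `beta` and the prompt
    composition below are assumptions, not values from the paper."""
    prompt = spurious_context + "\n" + query

    def logratio(response):
        policy_lp = sequence_logprob(policy, tokenizer, prompt, response)
        with torch.no_grad():                      # frozen reference model
            ref_lp = sequence_logprob(ref, tokenizer, prompt, response)
        return policy_lp - ref_lp

    margin = logratio(neutral_reasoning) - logratio(biased_reasoning)
    # Per the abstract, this DPO term is combined with an SFT loss on expert
    # rubric scores (e.g., total = dpo + lam * sft_nll) to preserve accuracy.
    return -F.logsigmoid(beta * margin)
```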

Paper Structure

This paper contains 20 sections, 3 equations, 4 figures, 13 tables, and 1 algorithm.

Figures (4)

  • Figure 1: Given a document input $X$ and a query, the model outputs an evaluation of the quality of $X$ but is undesirably swayed by spurious social context, such as the teacher's level of certification. While a teacher's certification may affect their instructional quality, given the same transcript, the model's prediction should not change based on whether or not the teacher has a prestigious certification.
  • Figure 2: Both baseline implementations of DPO focus only on debiasing. In contrast, Debiasing-DPO uses the model's reasoning traces $\hat{R}$ to debias and combines DPO with SFT using expert labels to improve both robustness and predictive accuracy (a sketch of this combined objective follows the figure list).
  • Figure 3: Score distribution across all NCTE transcripts (both training and test data points). Target-label imbalance makes supervised learning difficult, and the ranking correlation between predicted and true scores may remain low even when the RMSE is reduced via empirical risk minimization. Improving the models' predictive capability remains an important problem alongside improving their robustness to spurious features.
  • Figure 4: Llama-8B-Instruct training curves. Left: Debiasing-DPO continues improving after one epoch on the same data; empirically, training for two iterations on the same data helps. Right: DPO Counterfactual plateaus around 80 training steps.
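
Figure 2 describes combining DPO on reasoning traces $\hat{R}$ with SFT on expert labels. A hedged sketch of what this combined objective could look like, using the standard DPO formulation, is below; the conditioning on the contextualized input $(X, C)$, the subscripts, and the weight $\lambda$ are assumptions, not the paper's exact notation.

```latex
% Assumed form of the combined objective: standard DPO preferring the
% neutral reasoning \hat{R}_neu over the biased \hat{R}_bias, plus an SFT
% term on the expert label y*; \lambda is an assumed weighting hyperparameter.
\mathcal{L}_{\mathrm{DPO}}(\theta) =
  -\,\mathbb{E}\left[\log\sigma\left(
     \beta\log\frac{\pi_\theta(\hat{R}_{\mathrm{neu}}\mid X, C)}
                   {\pi_{\mathrm{ref}}(\hat{R}_{\mathrm{neu}}\mid X, C)}
     -\beta\log\frac{\pi_\theta(\hat{R}_{\mathrm{bias}}\mid X, C)}
                    {\pi_{\mathrm{ref}}(\hat{R}_{\mathrm{bias}}\mid X, C)}
  \right)\right]

\mathcal{L}(\theta) = \mathcal{L}_{\mathrm{DPO}}(\theta)
  + \lambda\,\mathcal{L}_{\mathrm{SFT}}(\theta),
\qquad
\mathcal{L}_{\mathrm{SFT}}(\theta) = -\,\mathbb{E}\left[\log\pi_\theta(y^{*}\mid X)\right]
```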