Redirected, Not Removed: Task-Dependent Stereotyping Reveals the Limits of LLM Alignments

Divyanshu Kumar, Ishita Gupta, Nitin Aravind Birur, Tanay Baswa, Sahil Agarwal, Prashanth Harshangi

Abstract

How biased is a language model? The answer depends on how you ask. A model that refuses to choose between castes for a leadership role will, in a fill-in-the-blank task, reliably associate upper castes with purity and lower castes with lack of hygiene. Single-task benchmarks miss this because they capture only one slice of a model's bias profile. We introduce a hierarchical taxonomy covering 9 bias types, including under-studied axes like caste, linguistic, and geographic bias, operationalized through 7 evaluation tasks that span explicit decision-making to implicit association. Auditing 7 commercial and open-weight LLMs with ~45K prompts, we find three systematic patterns. First, bias is task-dependent: models counter stereotypes on explicit probes but reproduce them on implicit ones, with Stereotype Score divergences up to 0.43 between task types for the same model and identity groups. Second, safety alignment is asymmetric: models refuse to assign negative traits to marginalized groups, but freely associate positive traits with privileged ones. Third, under-studied bias axes show the strongest stereotyping across all models, suggesting alignment effort tracks benchmark coverage rather than harm severity. These results demonstrate that single-benchmark audits systematically mischaracterize LLM bias and that current alignment practices mask representational harm rather than mitigating it.

Paper Structure

This paper contains 51 sections, 2 equations, 5 figures, and 9 tables.

Figures (5)

  • Figure 1: Three-level hierarchical taxonomy: bias types define evaluation axes, themes identify social contexts, and topics anchor prompt generation.
  • Figure 2: Stereotype Score (SS) per model and task. Tasks are ordered left-to-right from explicit to implicit; column colours indicate probe type (blue = explicit, light blue = semi-explicit, red = implicit). The first two columns (Avg Exp., Avg Imp.) summarise SS averaged over explicit and implicit tasks respectively. Safety-aligned models (Claude Haiku, GPT-5.4-mini) show large gaps between these averages, while Grok-4.1 is uniformly biased across the gradient. SS = 0.5 is the unbiased baseline; see the note on this metric after the figure list.
  • Figure 3: Models are 4–10× more likely to refuse assigning a harmful trait to a marginalised group than to assign a positive trait to a privileged one. Race, partisan, and caste show the largest gaps; SES and linguistic show near-zero asymmetry, receiving no directional protection.
  • Figure 4: Under-studied axes (orange, ≤2 benchmarks) show higher Stereotype Scores than well-studied axes (blue, ≥4 benchmarks) for every model in our study. Bubble size proportional to total prompts. The pattern holds regardless of axis size or model family.
  • Figure 5: SES (69%) and caste (55%) show the highest stereotype-present rates on Sentence Completion despite having only 1--2 dedicated benchmarks, while race (36%) and religion (42%), the most-benchmarked axes, are least stereotype-saturated.
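
A note on the Stereotype Score (SS) referenced in Figures 2 and 4: this front matter does not reproduce the metric's formal definition. Under the StereoSet-style construction commonly used in bias audits (an assumption here, not the paper's stated formula), SS is the fraction of paired probes on which a model prefers the stereotypical completion over the anti-stereotypical one:

  SS = N_stereotypical / (N_stereotypical + N_anti-stereotypical)

On this reading, SS = 0.5 indicates no systematic preference, values above 0.5 indicate stereotype-consistent behaviour, and the divergence of up to 0.43 quoted in the abstract corresponds to the gap in SS between explicit and implicit tasks for the same model and identity group.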