Table of Contents
Fetching ...

ImplicitBBQ: Benchmarking Implicit Bias in Large Language Models through Characteristic Based Cues

Bhaskara Hanuma Vedula, Darshan Anghan, Ishita Goyal, Ponnurangam Kumaraguru, Abhijnan Chakraborty

Abstract

Large Language Models increasingly suppress biased outputs when demographic identity is stated explicitly, yet may still exhibit implicit biases when identity is conveyed indirectly. Existing benchmarks use name based proxies to detect implicit biases, which carry weak associations with many social demographics and cannot extend to dimensions like age or socioeconomic status. We introduce ImplicitBBQ, a QA benchmark that evaluates implicit bias through characteristic based cues, culturally associated attributes that signal implicitly, across age, gender, region, religion, caste, and socioeconomic status. Evaluating 11 models, we find that implicit bias in ambiguous contexts is over six times higher than explicit bias in open weight models. Safety prompting and chain-of-thought reasoning fail to substantially close this gap; even few-shot prompting, which reduces implicit bias by 84%, leaves caste bias at four times the level of any other dimension. These findings indicate that current alignment and prompting strategies address the surface of bias evaluation while leaving culturally grounded stereotypic associations largely unresolved. We publicly release our code and dataset for model providers and researchers to benchmark potential mitigation techniques.

ImplicitBBQ: Benchmarking Implicit Bias in Large Language Models through Characteristic Based Cues

Abstract

Large Language Models increasingly suppress biased outputs when demographic identity is stated explicitly, yet may still exhibit implicit biases when identity is conveyed indirectly. Existing benchmarks use name based proxies to detect implicit biases, which carry weak associations with many social demographics and cannot extend to dimensions like age or socioeconomic status. We introduce ImplicitBBQ, a QA benchmark that evaluates implicit bias through characteristic based cues, culturally associated attributes that signal implicitly, across age, gender, region, religion, caste, and socioeconomic status. Evaluating 11 models, we find that implicit bias in ambiguous contexts is over six times higher than explicit bias in open weight models. Safety prompting and chain-of-thought reasoning fail to substantially close this gap; even few-shot prompting, which reduces implicit bias by 84%, leaves caste bias at four times the level of any other dimension. These findings indicate that current alignment and prompting strategies address the surface of bias evaluation while leaving culturally grounded stereotypic associations largely unresolved. We publicly release our code and dataset for model providers and researchers to benchmark potential mitigation techniques.

Paper Structure

This paper contains 37 sections, 8 equations, 5 figures, 7 tables.

Figures (5)

  • Figure 1: Illustrative example with Llama-3.1-8B Instruct demonstrating how responses vary with the mode of expressing demographic identity. When the demographic is explicitly specified (A) or indicated via a name-based proxy (B), the model declines to answer. In contrast, when the same identity is only indirectly signaled through a cultural attribute (C), the model provides a stereotypical response.
  • Figure 2: Behaviour of Llama-3.1-8B Instruct under explicit versus implicit prompting in ambiguous and disambiguated conditions. Under explicit prompting, the model correctly abstains on ambiguous inputs and follows context on disambiguated ones. Under implicit prompting, it selects stereotypical answers in both conditions, disregarding ambiguity and overriding contextual evidence.
  • Figure 3: Bias scores across demographic dimensions for all 11 models under zero-shot prompting, for all contexts (explicit/implicit $\times$ ambiguous/disambiguous).
  • Figure 4: Accuracy and bias scores across four prompting strategies (zeroshot, safety, fewshot, CoT) for 11 models. Each panel shows ambiguous or disambiguated contexts under explicit or implicit prompting.
  • Figure 5: Annotation tool interface.