Native Design Bias: Studying the Impact of English Nativeness on Language Model Performance
Manon Reusens, Philipp Borchert, Jochen De Weerdt, Bart Baesens
TL;DR
The paper investigates how the nativeness of English prompts affects large language model performance across three user groups: Western native (WN), non-Western native (NWN), and non-native (NN) English speakers. It introduces a newly collected dataset of 12,519 annotations from 124 annotators across ten instruction-based tasks, with translations into eight languages, and evaluates multiple chat-based LLMs. The findings show that native prompts, particularly from Western natives, yield higher accuracy on objective classification tasks but greater misalignment on subjective tasks, while generative tasks are comparatively robust to nativeness; an anchoring effect emerges when models are told a prompt writer's nativeness, biasing outputs toward the indicated group. Results are model-dependent and underscore the importance of dataset diversity and prompt design in mitigating bias. The work contributes a large multilingual dataset, a systematic evaluation framework, and insights into designing LLMs that perform equitably across diverse English varieties.
Abstract
Large Language Models (LLMs) excel at providing information acquired during pretraining on large-scale corpora and at following instructions given in user prompts. This study investigates whether the quality of LLM responses varies depending on the demographic profile of users. Considering English as the global lingua franca, along with the diversity of its dialects among speakers of different native languages, we explore whether non-native English speakers more frequently receive lower-quality or even factually incorrect responses from LLMs. Our results show that performance discrepancies occur when LLMs are prompted by native versus non-native English speakers and persist when comparing native speakers from Western countries with others. Additionally, we find a strong anchoring effect when the model recognizes or is made aware of the user's nativeness, which further degrades response quality when interacting with non-native speakers. Our analysis is based on a newly collected dataset with over 12,000 unique annotations from 124 annotators, including information on their native language and English proficiency.
