
Why That Robot? A Qualitative Analysis of Justification Strategies for Robot Color Selection Across Occupational Contexts

Jiangen He, Wanqi Zhang, Jessica K. Barfield

Abstract

As robots increasingly enter the workforce, human-robot interaction (HRI) must address how implicit social biases influence user preferences. This paper investigates how users rationalize their selections of robots varying in skin tone and anthropomorphic features across different occupations. By qualitatively analyzing 4,146 open-ended justifications from 1,038 participants, we map the reasoning frameworks driving robot color selection across four professional contexts. We developed and validated a comprehensive, multidimensional coding scheme via human--AI consensus ($\kappa = 0.73$). Our results demonstrate that while utilitarian \textit{Functionalism} is the dominant justification strategy (52\%), participants systematically adapted these practical rationales to align with established racial and occupational stereotypes. Furthermore, we reveal that bias frequently operates beneath conscious rationalization: exposure to racial stereotype primes significantly shifted participants' color choices, yet their spoken justifications remained masked by standard affective or task-related reasoning. We also found that demographic backgrounds significantly shape justification strategies, and that robot shape strongly modulates color interpretation. Specifically, as robots become highly anthropomorphic, users increasingly retreat from functional reasoning toward \textit{Machine-Centric} de-racialization. Through these empirical results, we provide actionable design implications to help reduce the perpetuation of societal biases in future workforce robots.

Paper Structure

This paper contains 30 sections, 7 figures, 2 tables.

Figures (7)

  • Figure 3: Frequency distribution of the fifteen sub-categories.
  • Figure 4: Proportion of justification categories across the four task contexts.
  • Figure 5: Sub-category proportions (%) within each task context.
  • Figure 6: Proportion of justification categories by robot color choice.
  • Figure 7: Interaction between priming condition, justification category, and prime-aligned robot selection. The Top Row displays the overall proportion of each justification category under Stereotype and Non-Stereotype priming conditions. Each bar is subdivided: the darker (bottom/base) segment represents choices where the selected robot color aligned with the racial prime (e.g., selecting a Dark robot after a Black prime), while the lighter (top) segment represents all other color selections. The Bottom Row isolates only the prime-aligned selections, showing the darker segments from the top row. Categories: C1: Functionalism, C2: Psych/Affective, C3: Machine-Centric, C4: Preference/Evasion, C5: Identity/Social.
  • ...and 2 more figures