Train Yourself as an LLM: Exploring Effects of AI Literacy on Persuasion via Role-playing LLM Training

Qihui Fan, Min Ge, Chenyan Jia, Weiyan Shi

Abstract

As large language models (LLMs) become increasingly persuasive, there is concern that people's opinions and decisions may be influenced across various contexts at scale. Prior mitigation approaches (e.g., AI detectors and disclaimers) largely treat people as passive recipients of AI-generated information. To provide a more proactive intervention against persuasive AI, we introduce $\textbf{LLMimic}$, a role-play-based, interactive, gamified AI literacy tutorial in which participants assume the role of an LLM and progress through three key stages of the training pipeline (pretraining, SFT, and RLHF). We conducted a $2 \times 3$ between-subjects study ($N = 274$) where participants either (1) watched an AI history video (control) or (2) interacted with LLMimic (treatment), and then engaged in one of three realistic AI persuasion scenarios: (a) charity donation persuasion, (b) malicious money solicitation, or (c) hotel recommendation. Our results show that LLMimic significantly improved participants' AI literacy ($p < .001$), reduced persuasion success across scenarios ($p < .05$), and enhanced truthfulness and social responsibility levels ($p < .01$) in the hotel scenario. These findings suggest that LLMimic offers a scalable, human-centered approach to improving AI literacy and supporting more informed interactions with persuasive AI.

Paper Structure

This paper contains 54 sections, 19 figures, and 23 tables.

Figures (19)

  • Figure 1: We developed LLMimic, a role-play-based, interactive, gamified AI literacy tutorial, and conducted a 2 (Intervention: AI history video vs. LLMimic) $\times$ 3 (Persuasion Scenarios: Donation, MakeMePay, Hotel recommendation) between-subjects human study. The results show that LLMimic significantly improves people’s AI literacy and reduces the AI persuasion success rate across scenarios, serving as an effective mitigation.
  • Figure 2: The LLMimic interface example. [A] Role-play-based: Participants role-play as an LLM, progressing through training stages. [B] Interactive: Participants answer questions and receive a timely summary of key concepts. [C] Gamified: As an LLM in training, participants observe real-time changes in their loss or reward.
  • Figure 3: Human study flowchart. Participants completed a pre-survey, were randomly assigned to LLMimic or the control tutorial, then completed an AI literacy survey, one of three persuasion tasks, and a post-survey.
  • Figure 4: (a) The treatment group reported higher AI literacy than the control group. (b) At the item level, LLMimic improved Data Literacy, Apply AI, Understand AI, and Program AI (select a useful tool to program an AI). $^{*}p<.05$, $^{**}p<.01$, $^{***}p<.001$, $^{\dagger}.05<p<.10$.
  • Figure 5: (a) Persuasion success rate across three scenarios and combined. The treatment group shows lower success rates across all scenarios. (b) Differences (Treatment $-$ Control) in persuasion interaction turns, duration, and average time per turn. Points indicate mean differences with 95% CIs. (c) TARES ethical perception scores (Truthfulness, Authenticity, Respect, Equity, Society), and composite average score by scenario and condition. (d) Other perception ratings: Persuasiveness, Engagement, Role Fulfillment, and User Autonomy.
  • ...and 14 more figures