The Hidden Costs of AI-Mediated Political Outreach: Persuasion and AI Penalties in the US and UK

Andreas Jungherr, Adrian Rauchfleisch

Abstract

As AI-enabled systems become available for political campaign outreach, an important question has received little empirical attention: how do people evaluate the communicative practices these systems represent, and what consequences do those evaluations carry? Most research on AI-enabled persuasion examines attitude change under enforced exposure, leaving aside whether people regard AI-mediated outreach as legitimate or not. We address this gap with a preregistered 2×2 experiment conducted in the United States and the United Kingdom (N = 1,800 per country) varying outreach intent (informational vs. persuasive) and type of interaction partner (human vs. AI-mediated) in the context of political issues that respondents consider highly important. We find consistent evidence for two evaluation penalties. A persuasion penalty emerges across nearly all outcomes in both countries: explicitly persuasive outreach is evaluated as less acceptable, more threatening to personal autonomy, less beneficial, and more damaging to organizational trust than informational outreach, consistent with reactance to perceived threats to attitudinal freedom. An AI penalty is consistent with a distinct mechanism: AI-mediated outreach triggers normative concerns about appropriate communicative agents, producing similarly negative evaluations across five outcomes in both countries. As automated outreach becomes more widespread, how people judge it may matter for democratic communication just as much as whether it changes minds.

Paper Structure

This paper contains 49 sections, 8 figures, and 60 tables.

Figures (8)

  • Figure 1: Estimated effects of persuasive outreach (H1, orange) and AI-mediated outreach (H2, blue) on six outcome variables, for the US (left) and the UK (right). Points are OLS regression coefficients with 95% confidence intervals. The coefficients represent the mean difference between the two levels of each factor, averaged over the other factor. Non-significant estimates are transparent.
  • Figure 2: Estimated marginal means for four outcomes with significant interaction effects in the United Kingdom, by outreach intent (informational vs. persuasive) and outreach mode (human vs. AI). Error bars represent 95% confidence intervals.
  • Figure 3: Predicted means for future campaign avoidance (top) and penalty for source (bottom) as a function of AI risk perception, separately for human-mediated (dashed) and AI-mediated (solid) outreach, in the United States (left) and the United Kingdom (right). Shaded bands represent 95% confidence intervals. All four interactions are statistically significant.
  • Figure 4: Distribution of issue categories in the US and the UK.
  • Figure 5: Percentage-point difference (UK–US); positive values indicate that an issue was more common in the UK.
  • ...and 3 more figures
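
Estimation sketch

The Figure 1 caption describes each reported coefficient as the mean difference between the two levels of one factor, averaged over the other factor, estimated by OLS with 95% confidence intervals. The sketch below illustrates how estimates of that form can be obtained for a 2×2 between-subjects design. It uses simulated data, hypothetical variable names, and statsmodels; it is not the authors' code or materials.

```python
# Illustrative only: simulated data for a 2x2 design, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1800  # per-country sample size reported in the abstract

df = pd.DataFrame({
    "persuasive": rng.integers(0, 2, n),  # 0 = informational, 1 = persuasive
    "ai": rng.integers(0, 2, n),          # 0 = human, 1 = AI-mediated
})
# Hypothetical outcome on an evaluation scale with additive treatment effects.
df["acceptability"] = (
    4.5 - 0.5 * df["persuasive"] - 0.4 * df["ai"] + rng.normal(0, 1.2, n)
)

# Centering each factor at -0.5/+0.5 makes its main-effect coefficient the
# difference between the factor's two levels, averaged over the other factor.
df["persuasive_c"] = df["persuasive"] - 0.5
df["ai_c"] = df["ai"] - 0.5

model = smf.ols("acceptability ~ persuasive_c * ai_c", data=df).fit()
print(model.params)                # main effects and interaction
print(model.conf_int(alpha=0.05))  # 95% confidence intervals, as in Figure 1
```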