Views on AI Existential Risk Before and After a Public Event at Harvard University

Greg Kestin, Nate Soares

Abstract

We report the results of identical pre- and post-event surveys administered to attendees of a public event (a talk, two-sided conversation, and Q&A) centered on the book If Anyone Builds It, Everyone Dies, held at Harvard University in March 2026. The surveys asked about the perceived probability of AI-caused extinction or severe disempowerment resulting from unimpeded AI development, confidence in those estimates, and whether mitigating AI existential risk should be a global priority. Among the 89 matched participants, the post-event median estimate of the probability of existential risk from advanced AI was 70%, and 96% agreed that mitigating AI existential risk should be a global priority. Although these self-selected respondents' pre-event views were already high (50% and 93%, respectively) relative to similar surveys previously administered to experts and the general public, the event produced increases on all measures in aggregate. The magnitude of increase in risk probability was negatively correlated with prior familiarity with the topic: among attendees with little prior familiarity, 60% shifted upward and none shifted downward, whereas among self-described experts, no respondents shifted upward and 20% shifted downward. Self-reported confidence also increased significantly, and confidence shifts were positively correlated with probability shifts. These findings indicate that a structured public engagement event can meaningfully shift risk perceptions, particularly among newcomers to the topic.

Paper Structure

This paper contains 17 sections, 2 figures, and 2 tables.

Figures (2)

  • Figure 1: Changes in self-reported probability of existential risk from advanced AI under unimpeded development, before and after the event, stratified by prior familiarity. Each panel shows kernel density estimates of the pre-event (teal, dashed outline) and post-event (purple/pink, solid) distributions for one familiarity group. Arrows indicate the direction of the net shift, with percentages showing the fraction of participants whose estimates increased or decreased. The "Nothing at all" exposure group is omitted because it contains only one member. The gradient from "little familiarity" to "expert" reveals a monotonic decrease in upward shifts and the emergence of downward shifts.
  • Figure 2: Direction and magnitude of individual pre-to-post belief shifts across all three survey questions, stratified by self-reported prior exposure to the topic of AI existential risk. Each bar shows the proportion of matched participants whose responses shifted down (orange, left), remained unchanged (gray, center), or shifted up (blue, right), with counts shown inside each segment. Right-hand labels indicate the net mean shift in percentage points (Q1) or scale points on a five-point scale (Q2, Q3). The "Nothing at all" exposure group is omitted because it contains only one member. A gradient of decreasing upward shifts with increasing prior exposure is visible across all three questions.