
Community Driving-Safety Deterioration as a Push Factor for Public Endorsement of AI Driving Capability

Amir Rafe, Subasish Das

Abstract

Road traffic crashes claim approximately 1.19 million lives annually worldwide, and human error accounts for the vast majority, yet the autonomous vehicle acceptance literature models adoption almost exclusively through technology-centered pull factors such as perceived usefulness and trust. This study examines a moderated mediation model in which perceived community driving-safety concern (PCSC) predicts evaluations of AI versus human driving capability, mediated by Generalized AI Orientation and moderated by personal driving frequency. Weighted structural equation modeling is applied to a nationally representative U.S. probability sample from Pew Research Center's American Trends Panel Wave 152, using Weighted Least Squares Mean and Variance Adjusted (WLSMV)-estimated confirmatory factor analysis on ordinal indicators, bias-corrected bootstrap inference, and seven robustness checks including Imai sensitivity analysis, E-value confounding thresholds, and propensity score matching. Results reveal a dual-pathway mechanism constituting an inconsistent mediation: PCSC exerts a small positive direct effect on AI driving evaluation, consistent with a domain-specific push interpretation, while simultaneously suppressing Generalized AI Orientation, which is itself a strong positive predictor of AI driving evaluation. Conditional indirect effects are negative and statistically significant at low, mean, and high levels of driving frequency. These findings establish a risk-spillover mechanism whereby community driving-safety concern promotes domain-specific AI endorsement yet suppresses domain-general AI enthusiasm, yielding a near-zero net total effect.
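The moderated mediation described in the abstract can be written out explicitly. As a sketch, assuming a first-stage specification in which driving frequency $W$ moderates the PCSC $\rightarrow$ AI Orientation path (consistent with the conditional indirect effects reported at low, mean, and high $W$), the two structural equations and the conditional indirect effect are:

$$M = i_M + a_1 X + a_2 W + a_3 (X \times W) + e_M$$

$$Y = i_Y + c' X + b_1 M + e_Y$$

$$\omega(W) = (a_1 + a_3 W)\, b_1$$

where $X$ is PCSC, $M$ is Generalized AI Orientation, and $Y$ is the AI-vs.-human driving evaluation. The reported pattern is an inconsistent mediation in this notation: $c' > 0$ (positive direct effect) while $a_1 < 0$ and $b_1 > 0$, so $\omega(W) < 0$ and the total effect $c' + \omega(W)$ is near zero.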

Paper Structure

This paper contains 34 sections, 9 equations, 5 figures, 7 tables.

Figures (5)

  • Figure 1: Analytic pipeline overview. The six-stage design proceeds from data acquisition (Stage 1) through variable operationalization (Stage 2), CFA-based measurement modeling with WLSMV estimation (Stage 3), moderated mediation structural estimation (Stage 4), bias-corrected bootstrap inference (Stage 5), and a pre-registered robustness framework comprising seven complementary sensitivity checks (Stage 6).
  • Figure 2: Structural path diagram for the moderated mediation model. Solid lines denote paths; the dashed line indicates the interaction path. Standardized coefficients ($\beta$) are shown alongside each path. $R^2$ values indicate variance explained in the mediator and outcome equations.
  • Figure 3: Bootstrap distributions of conditional indirect effects at low ($-1\,SD$), mean, and high ($+1\,SD$) driving frequency (10,000 resamples). Solid vertical lines mark point estimates; red dashed lines indicate 95% bias-corrected confidence interval boundaries. All three intervals exclude zero.
  • Figure 4: Multigroup analysis by urbanicity. Path coefficients ($B \pm$ 95% CI) for the three structural paths across urban, suburban, and rural subsamples. The $b_1$ path (AI Orientation $\rightarrow$ Outcome) is stable across groups; the $a_1$ path (PCSC $\rightarrow$ AI Orientation) reaches significance only in suburban and rural contexts.
  • Figure 5: Cross-task specificity of the PCSC direct effect ($c'$) across six AI-vs.-human evaluation tasks from the HUMANVAI battery. The focal driving-task outcome is highlighted in red; non-driving tasks appear in blue. The positive direct effect of PCSC is statistically significant for the driving task ($p = 0.010$) and news writing ($p < 0.001$), negative for parole decisions ($p = 0.038$), and non-significant for medical diagnosis, hiring decisions, and loan decisions. Error bars represent 95% confidence intervals. $^{*}p < 0.05$; $^{**}p < 0.01$; $^{***}p < 0.001$.
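The bias-corrected bootstrap procedure behind Figure 3 can be illustrated with a minimal sketch. This is not the paper's implementation (which uses weighted SEM on the Pew ATP Wave 152 data); it is a hedged illustration on simulated data, assuming first-stage moderation of the PCSC $\rightarrow$ AI Orientation path, showing how a conditional indirect effect $(a_1 + a_3 W)\,b_1$ and its 95% bias-corrected interval are computed at $-1\,SD$, mean, and $+1\,SD$ of the moderator.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Simulated data (illustrative only; not the Pew ATP data).
# Population values loosely mimic the reported sign pattern: a1 < 0, b1 > 0, c' > 0.
n = 500
x = rng.normal(size=n)                               # PCSC (predictor)
w = rng.normal(size=n)                               # driving frequency (moderator)
m = -0.20 * x + 0.05 * x * w + rng.normal(size=n)    # Generalized AI Orientation (mediator)
y = 0.10 * x + 0.50 * m + rng.normal(size=n)         # AI driving evaluation (outcome)

def cond_indirect(x, w, m, y, w_level):
    """Conditional indirect effect (a1 + a3*w_level) * b1 from two OLS fits."""
    Xm = np.column_stack([np.ones_like(x), x, w, x * w])
    a = np.linalg.lstsq(Xm, m, rcond=None)[0]        # [intercept, a1, a2, a3]
    Xy = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(Xy, y, rcond=None)[0]        # [intercept, c', b1]
    return (a[1] + a[3] * w_level) * b[2]

def bc_bootstrap_ci(x, w, m, y, w_level, B=2000, alpha=0.05):
    """Bias-corrected (BC) percentile bootstrap CI for the conditional indirect effect."""
    theta_hat = cond_indirect(x, w, m, y, w_level)
    boots = np.empty(B)
    for i in range(B):
        idx = rng.integers(0, len(x), len(x))        # resample cases with replacement
        boots[i] = cond_indirect(x[idx], w[idx], m[idx], y[idx], w_level)
    # Bias-correction constant z0 from the share of resamples below the point estimate.
    z0 = norm.ppf(np.mean(boots < theta_hat))
    lo_q = norm.cdf(2 * z0 + norm.ppf(alpha / 2))
    hi_q = norm.cdf(2 * z0 + norm.ppf(1 - alpha / 2))
    lo, hi = np.quantile(boots, [lo_q, hi_q])
    return theta_hat, (lo, hi)

for label, w_level in [("-1 SD", -1.0), ("mean", 0.0), ("+1 SD", 1.0)]:
    est, (lo, hi) = bc_bootstrap_ci(x, w, m, y, w_level)
    print(f"W = {label}: indirect = {est:.3f}, 95% BC CI [{lo:.3f}, {hi:.3f}]")
```

The BC adjustment shifts the percentile endpoints by the bias constant $z_0$, which matters here because indirect-effect sampling distributions are typically skewed; the paper's analysis uses 10,000 resamples and survey weights, which this unweighted sketch omits.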