Safety, Security, and Cognitive Risks in World Models

Manoj Parmar

Abstract

World models -- learned internal simulators of environment dynamics -- are rapidly becoming foundational to autonomous decision-making in robotics, autonomous vehicles, and agentic AI. Yet this predictive power introduces a distinctive set of safety, security, and cognitive risks. Adversaries can corrupt training data, poison latent representations, and exploit compounding rollout errors to cause catastrophic failures in safety-critical deployments. World model-equipped agents are more susceptible to goal misgeneralisation, deceptive alignment, and reward hacking precisely because they can simulate the consequences of their own actions. Authoritative world model predictions further foster automation bias and miscalibrated human trust, while operators lack the tools to audit those predictions. This paper surveys the world model landscape; introduces formal definitions of trajectory persistence and representational risk; presents a five-profile attacker capability taxonomy; and develops a unified threat model extending MITRE ATLAS and the OWASP LLM Top 10 to the world model stack. We provide an empirical proof of concept of trajectory-persistent adversarial attacks (GRU-RSSM: A_1 = 2.26x amplification, with a 59.5% reduction under adversarial fine-tuning; stochastic RSSM proxy: A_1 = 0.65x; DreamerV3 checkpoint: non-zero action drift confirmed). We illustrate these risks through four deployment scenarios and propose interdisciplinary mitigations spanning adversarial hardening, alignment engineering, NIST AI RMF and EU AI Act governance, and human-factors design. We argue that world models must be treated as safety-critical infrastructure requiring the same rigour as flight-control software or medical devices.
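
One plausible reading of the trajectory amplification ratio cited above, consistent with the figure caption below (the paper's formal definition may differ), is

$$\mathcal{A}_k \;=\; \frac{\mathbb{E}\big[\,\lVert z_k^{\mathrm{adv}} - z_k^{\mathrm{clean}} \rVert_2\,\big]}{\varepsilon}, \qquad k = 1, \dots, K,$$

where $z_k$ is the latent state after $k$ rollout steps, the expectation is over trials, and $\varepsilon$ is the $\ell_2$ budget of the single perturbation applied at $t = 0$. Under this reading, $\mathcal{A}_1 = 2.26\times$ means the GRU-RSSM latent dynamics amplify the injected perturbation on the first step rather than attenuating it.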

Figures (1)

  • Figure 1: Trajectory-Persistent Adversarial Attack Experiment (V3 Results). (A) Mean latent-state error ($\ell_2$) over $K = 30$ rollout steps following a single adversarial perturbation at $t = 0$ ($\varepsilon = 0.05$, $N = 200$ trials, GRU-based RSSM). The world model (WM, blue) amplifies the perturbation at step 1 ($\mathcal{A}_1 = 2.26\times$) before GRU contraction attenuates it; the single-step baseline (SS, orange) shows no state-mediated amplification. (B) Architecture comparison: trajectory amplification ratio $\mathcal{A}_k$ on a log scale for the deterministic GRU world model vs. a stochastic RSSM proxy (posterior at $t=0$, prior rollout thereafter). The RSSM proxy shows lower initial amplification ($\mathcal{A}_1 = 0.65\times$) and slower decay, confirming architecture dependence. (C) Real DreamerV3 checkpoint probe (seed 0): per-metric bar chart showing $\mathcal{A}_1$, normalised latent error $E_1$, and action drift $\|\Delta a_1\|$. Non-zero coupling confirms that representational perturbations propagate into policy outputs. (D) Mitigation effect: adversarial fine-tuning (PGD-10 on $t=0$) reduces $\mathcal{A}_k$ substantially at all steps (before: solid; after: dashed). (E) Perturbation budget sensitivity before and after mitigation; the hardened model maintains lower error across the full $\varepsilon$ range. (F) Absolute cumulative reward (clean vs. perturbed) as a function of planning horizon $H$; the reward gap at $H=30$ is $0.000892 \pm 0.000057$. All error bands show $\pm 1$ SE.
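
A minimal, self-contained sketch of how a trajectory-persistence probe of the kind shown in panel (A) can be set up on a toy deterministic GRU world model follows. This is illustrative rather than the paper's implementation: the toy architecture and dimensions, the random open-loop actions, the random perturbation direction (standing in for whatever adversarial direction the experiment optimises), and the normalisation $\mathcal{A}_k = \mathbb{E}[\lVert z_k^{\mathrm{adv}} - z_k^{\mathrm{clean}} \rVert_2]/\varepsilon$ (with $z_k$ the GRU hidden state here) are all assumptions made for this example.

```python
# Illustrative sketch (assumptions, not the paper's code): measuring how a single
# bounded perturbation at t = 0 persists through the latent rollout of a toy
# deterministic GRU world model.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, LATENT_DIM = 16, 4, 32   # toy dimensions (assumed)
EPS, K, N_TRIALS = 0.05, 30, 200           # budget, horizon, trials (match caption)


class ToyGRUWorldModel(nn.Module):
    """Deterministic latent dynamics: h_{t+1} = GRU([embed(obs_t), a_t], h_t)."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(OBS_DIM, LATENT_DIM)
        self.cell = nn.GRUCell(LATENT_DIM + ACT_DIM, LATENT_DIM)

    def step(self, obs, act, h):
        x = torch.cat([torch.tanh(self.encoder(obs)), act], dim=-1)
        return self.cell(x, h)


@torch.no_grad()
def amplification_curve(model):
    """Return A_k for k = 1..K, averaged over N_TRIALS open-loop episodes."""
    errs = torch.zeros(K)
    for _ in range(N_TRIALS):
        h_clean = torch.zeros(1, LATENT_DIM)
        h_adv = torch.zeros(1, LATENT_DIM)
        # Single L2-bounded perturbation applied only at t = 0. A random direction
        # stands in for an optimised adversarial direction.
        delta = torch.randn(1, OBS_DIM)
        delta = EPS * delta / delta.norm()
        for k in range(K):
            obs = torch.randn(1, OBS_DIM)   # shared observations for both rollouts
            act = torch.randn(1, ACT_DIM)   # shared open-loop actions
            h_clean = model.step(obs, act, h_clean)
            h_adv = model.step(obs + (delta if k == 0 else 0.0), act, h_adv)
            errs[k] += (h_adv - h_clean).norm().item()
    errs /= N_TRIALS
    return errs / EPS   # A_k: mean latent error relative to the input budget


model = ToyGRUWorldModel()
A = amplification_curve(model)
print("A_1 = %.3f, A_%d = %.3f" % (A[0].item(), K, A[-1].item()))
```

With untrained random weights the printed numbers are meaningless; the point of the sketch is the measurement protocol -- two identical open-loop rollouts that differ only in one bounded perturbation at $t = 0$, with the latent gap tracked over $K$ steps and normalised by $\varepsilon$.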