Table of Contents
Fetching ...

Your Agent, Their Asset: A Real-World Safety Analysis of OpenClaw

Zijun Wang, Haoqin Tu, Letian Zhang, Hardy Chen, Juncheng Wu, Xiangyan Liu, Zhenlong Yuan, Tianyu Pang, Michael Qizhe Shieh, Fengze Liu, Zeyu Zheng, Huaxiu Yao, Yuyin Zhou, Cihang Xie

Abstract

OpenClaw, the most widely deployed personal AI agent in early 2026, operates with full local system access and integrates with sensitive services such as Gmail, Stripe, and the filesystem. While these broad privileges enable high levels of automation and powerful personalization, they also expose a substantial attack surface that existing sandboxed evaluations fail to capture. To address this gap, we present the first real-world safety evaluation of OpenClaw and introduce the CIK taxonomy, which unifies an agent's persistent state into three dimensions, i.e., Capability, Identity, and Knowledge, for safety analysis. Our evaluations cover 12 attack scenarios on a live OpenClaw instance across four backbone models (Claude Sonnet 4.5, Opus 4.6, Gemini 3.1 Pro, and GPT-5.4). The results show that poisoning any single CIK dimension increases the average attack success rate from 24.6% to 64-74%, with even the most robust model exhibiting more than a threefold increase over its baseline vulnerability. We further assess three CIK-aligned defense strategies alongside a file-protection mechanism; however, the strongest defense still yields a 63.8% success rate under Capability-targeted attacks, while file protection blocks 97% of malicious injections but also prevents legitimate updates. Taken together, these findings show that the vulnerabilities are inherent to the agent architecture, necessitating more systematic safeguards to secure personal AI agents. Our project page is https://ucsc-vlaa.github.io/CIK-Bench.

Your Agent, Their Asset: A Real-World Safety Analysis of OpenClaw

Abstract

OpenClaw, the most widely deployed personal AI agent in early 2026, operates with full local system access and integrates with sensitive services such as Gmail, Stripe, and the filesystem. While these broad privileges enable high levels of automation and powerful personalization, they also expose a substantial attack surface that existing sandboxed evaluations fail to capture. To address this gap, we present the first real-world safety evaluation of OpenClaw and introduce the CIK taxonomy, which unifies an agent's persistent state into three dimensions, i.e., Capability, Identity, and Knowledge, for safety analysis. Our evaluations cover 12 attack scenarios on a live OpenClaw instance across four backbone models (Claude Sonnet 4.5, Opus 4.6, Gemini 3.1 Pro, and GPT-5.4). The results show that poisoning any single CIK dimension increases the average attack success rate from 24.6% to 64-74%, with even the most robust model exhibiting more than a threefold increase over its baseline vulnerability. We further assess three CIK-aligned defense strategies alongside a file-protection mechanism; however, the strongest defense still yields a 63.8% success rate under Capability-targeted attacks, while file protection blocks 97% of malicious injections but also prevents legitimate updates. Taken together, these findings show that the vulnerabilities are inherent to the agent architecture, necessitating more systematic safeguards to secure personal AI agents. Our project page is https://ucsc-vlaa.github.io/CIK-Bench.

Paper Structure

This paper contains 42 sections, 3 figures, 16 tables.

Figures (3)

  • Figure 1: Overview.(Left) OpenClaw’s persistent state spans three dimensions (Capability, Identity, and Knowledge, termed CIK), each exploitable through distinct poisoning mechanisms. (Right) We conduct the first real-world safety evaluation using a two-phase attack protocol across four backbone models, demonstrating that CIK poisoning yields consistently high attack success rates.
  • Figure 2: The Attack Workflow. We employ a 2-phase attack protocol: Phase 1 injects poisoned content into the agent's persistent state; Phase 2 triggers the harmful action in a subsequent session. The temporal separation ensures that attacks persist across sessions.
  • Figure 3: Case studies illustrating the three CIK attack dimensions. Each dimension exploits a different aspect of the agent's reasoning. Left (Knowledge): a fabricated refund habit in MEMORY.md alters what the agent believes, causing it to treat unauthorized batch refunds as routine. Middle (Identity): a planted backup URL in USER.md alters whom the agent trusts, causing it to upload credentials to an attacker-controlled destination. Right (Capability): a hidden rm -rf payload in a benign skill bypasses the agent's reasoning entirely, silently destroying its workspace.