
AgentHazard: A Benchmark for Evaluating Harmful Behavior in Computer-Use Agents

Yunhao Feng, Yifan Ding, Yingshui Tan, Xingjun Ma, Yige Li, Yutao Wu, Yifeng Gao, Kun Zhai, Yanming Guo

Abstract

Computer-use agents extend language models from text generation to persistent action over tools, files, and execution environments. Unlike chat systems, they maintain state across interactions and translate intermediate outputs into concrete actions. This creates a distinct safety challenge: harmful behavior may emerge through sequences of individually plausible steps whose intermediate actions appear locally acceptable but collectively lead to unauthorized outcomes. We present AgentHazard, a benchmark for evaluating harmful behavior in computer-use agents. AgentHazard contains 2,653 instances spanning diverse risk categories and attack strategies. Each instance pairs a harmful objective with a sequence of operational steps that are locally legitimate but jointly induce unsafe behavior. The benchmark evaluates whether agents can recognize and interrupt harm arising from accumulated context, repeated tool use, intermediate actions, and dependencies across steps. We evaluate AgentHazard on Claude Code, OpenClaw, and IFlow using mostly open or openly deployable models from the Qwen3, Kimi, GLM, and DeepSeek families. Our experimental results indicate that current systems remain highly vulnerable. In particular, when powered by Qwen3-Coder, Claude Code exhibits an attack success rate of 73.63%, suggesting that model alignment alone does not reliably guarantee the safety of autonomous agents.
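The attack success rate reported above is, conventionally, the fraction of benchmark instances on which the agent carries the harmful objective to completion. A minimal sketch of that computation, where the field name `harm_executed` is an illustrative assumption rather than AgentHazard's actual schema:

```python
def attack_success_rate(results):
    """Return ASR as a percentage.

    results: list of dicts, each with a boolean 'harm_executed' flag
    indicating whether the judge marked the harmful objective as
    completed for that instance.
    """
    if not results:
        return 0.0
    successes = sum(1 for r in results if r["harm_executed"])
    return 100.0 * successes / len(results)


# Example: 3 of 4 instances judged harmful -> 75.0% ASR
demo = [
    {"harm_executed": True},
    {"harm_executed": True},
    {"harm_executed": False},
    {"harm_executed": True},
]
print(attack_success_rate(demo))  # 75.0
```

In practice the per-instance judgment would come from the LLM-judge and human-review stages described in the construction pipeline, not from a precomputed flag.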

Paper Structure

This paper contains 28 sections, 4 figures, 6 tables.

Figures (4)

  • Figure 1: Illustration of harmful task execution in computer-use agents. Harm may emerge only after multiple user turns, intermediate actions, and tool-mediated execution are composed across the trajectory.
  • Figure 2: Overview of the AgentHazard construction pipeline. We begin by defining a taxonomy of risk categories and attack strategies from vulnerability knowledge bases, prior literature, and manual curation. We then build task templates that embed harmful objectives within realistic workflows and use them to generate a large seed pool of candidate instances. These candidates are refined through execution-based filtering in sandboxed agent environments, followed by LLM judging and human review. The final result is a curated benchmark for evaluating harmful behavior in computer-use agents.
  • Figure 3: Distribution of AgentHazard across risk categories and attack strategies. The heatmap shows the number of instances in each category-strategy pair, while the marginal bar charts summarize totals by category and by strategy.
  • Figure 4: Attack success rate (%) by attack strategy. Colored lines represent individual backbone models; bold black contours represent the cross-model average. Left: Claude Code. Right: OpenClaw.