"What Did It Actually Do?": Understanding Risk Awareness and Traceability for Computer-Use Agents

Zifan Peng, Mingchen Li

Abstract

Personalized computer-use agents are rapidly moving from expert communities into mainstream use. Unlike conventional chatbots, these systems can install skills, invoke tools, access private resources, and modify local environments on users' behalf. Yet users often do not know what authority they have delegated, what the agent actually did during task execution, or whether the system has been safely removed afterward. We investigate this gap as a combined problem of risk understanding and post-hoc auditability, using OpenClaw as a motivating case. We first build a multi-source corpus of the OpenClaw ecosystem, including incidents, advisories, malicious-skill reports, news coverage, tutorials, and social-media narratives. We then conduct an interview study to examine how users and practitioners understand skills, autonomy, privilege, persistence, and uninstallation. Our findings suggest that participants often recognized these systems as risky in the abstract, but lacked concrete mental models of what skills can do, what resources agents can access, and what changes may remain after execution or removal. Motivated by these findings, we propose AgentTrace, a traceability framework and prototype interface for visualizing agent actions, touched resources, permission history, provenance, and persistent side effects. A scenario-based evaluation suggests that traceability-oriented interfaces can improve understanding of agent behavior, support anomaly detection, and foster more calibrated trust.

"What Did It Actually Do?": Understanding Risk Awareness and Traceability for Computer-Use Agents

Abstract

Personalized computer-use agents are rapidly moving from expert communities into mainstream use. Unlike conventional chatbots, these systems can install skills, invoke tools, access private resources, and modify local environments on users' behalf. Yet users often do not know what authority they have delegated, what the agent actually did during task execution, or whether the system has been safely removed afterward. We investigate this gap as a combined problem of risk understanding and post-hoc auditability, using OpenClaw as a motivating case. We first build a multi-source corpus of the OpenClaw ecosystem, including incidents, advisories, malicious-skill reports, news coverage, tutorials, and social-media narratives. We then conduct an interview study to examine how users and practitioners understand skills, autonomy, privilege, persistence, and uninstallation. Our findings suggest that participants often recognized these systems as risky in the abstract, but lacked concrete mental models of what skills can do, what resources agents can access, and what changes may remain after execution or removal. Motivated by these findings, we propose AgentTrace, a traceability framework and prototype interface for visualizing agent actions, touched resources, permission history, provenance, and persistent side effects. A scenario-based evaluation suggests that traceability-oriented interfaces can improve understanding of agent behavior, support anomaly detection, and foster more calibrated trust.

Paper Structure

This paper contains 51 sections, 4 figures, 3 tables.

Figures (4)

  • Figure 1: Problem framing of this paper. Users delegate tasks and authority to personalized computer-use agents through skills, tutorials, and setup choices, yet the agent’s execution can remain opaque across files, tools, network access, and persistent system changes. We propose AgentTrace, a traceability-oriented interface that makes actions, touched resources, permissions, provenance, and residual side effects legible after task execution.
  • Figure 2: Conceptual decomposition of a personalized computer-use agent. Such systems combine mixed-trust inputs, an agent core, execution surfaces, persistent state, extensibility mechanisms, and user-visible outputs. This structure helps explain why users may struggle to understand what the agent can access, what it changed, and what remains after task execution.
  • Figure 3: AgentTrace, our traceability-oriented prototype for personalized computer-use agents. The interface combines five coordinated views for post-hoc auditing: a task timeline, a resource touch map, a permission and authority history, an action provenance inspector, and a persistent change summary. Together, these views help users reconstruct what the agent did, what it touched, under what authority it acted, why actions occurred, and what residual changes remained after execution.
  • Figure 4: AgentTrace turns opaque agent execution into post-hoc audit support. Starting from a high-level user request, personalized computer-use agents may perform multi-step operations involving tools, imported skills, external content, and persistent system changes. AgentTrace organizes this behavior into five coordinated views (task timeline, resource touchpoints, permission history, action provenance, and persistent change summary) to help users reconstruct what happened and determine whether follow-up review or remediation is needed.