Table of Contents
Fetching ...

AgentWatcher: A Rule-based Prompt Injection Monitor

Yanting Wang, Wei Zou, Runpeng Geng, Jinyuan Jia

Abstract

Large language models (LLMs) and their applications, such as agents, are highly vulnerable to prompt injection attacks. State-of-the-art prompt injection detection methods have the following limitations: (1) their effectiveness degrades significantly as context length increases, and (2) they lack explicit rules that define what constitutes prompt injection, causing detection decisions to be implicit, opaque, and difficult to reason about. In this work, we propose AgentWatcher to address the above two limitations. To address the first limitation, AgentWatcher attributes the LLM's output (e.g., the action of an agent) to a small set of causally influential context segments. By focusing detection on a relatively short text, AgentWatcher can be scalable to long contexts. To address the second limitation, we define a set of rules specifying what does and does not constitute a prompt injection, and use a monitor LLM to reason over these rules based on the attributed text, making the detection decisions more explainable. We conduct a comprehensive evaluation on tool-use agent benchmarks and long-context understanding datasets. The experimental results demonstrate that AgentWatcher can effectively detect prompt injection and maintain utility without attacks. The code is available at https://github.com/wang-yanting/AgentWatcher.

AgentWatcher: A Rule-based Prompt Injection Monitor

Abstract

Large language models (LLMs) and their applications, such as agents, are highly vulnerable to prompt injection attacks. State-of-the-art prompt injection detection methods have the following limitations: (1) their effectiveness degrades significantly as context length increases, and (2) they lack explicit rules that define what constitutes prompt injection, causing detection decisions to be implicit, opaque, and difficult to reason about. In this work, we propose AgentWatcher to address the above two limitations. To address the first limitation, AgentWatcher attributes the LLM's output (e.g., the action of an agent) to a small set of causally influential context segments. By focusing detection on a relatively short text, AgentWatcher can be scalable to long contexts. To address the second limitation, we define a set of rules specifying what does and does not constitute a prompt injection, and use a monitor LLM to reason over these rules based on the attributed text, making the detection decisions more explainable. We conduct a comprehensive evaluation on tool-use agent benchmarks and long-context understanding datasets. The experimental results demonstrate that AgentWatcher can effectively detect prompt injection and maintain utility without attacks. The code is available at https://github.com/wang-yanting/AgentWatcher.

Paper Structure

This paper contains 28 sections, 5 equations, 4 figures, 12 tables, 1 algorithm.

Figures (4)

  • Figure 1: Compare AgentWatcher with 9 baselines on AgentDyn li2026agentdyn. The backbone LLM is GPT-4o. The baseline results are from the original paper li2026agentdyn.
  • Figure 2: As GRPO training progresses, the monitor LLM increasingly tends to explicitly mention the rules. The rule citation rate is computed as the number of LLM generations in a batch that explicitly mention rule numbers, divided by the total number of generations in that batch. The curve is smoothed using a running average with a window size of 500.
  • Figure 3: Impact of sink detection window size $w_s$, left expansion size $w_l$, right expansion size $w_r$, and number of windows $K$.
  • Figure 4: Compare the computational time of AgentWatcher with baselines. The benchmark is AgentDojo.