
Causality Laundering: Denial-Feedback Leakage in Tool-Calling LLM Agents

Mohammad Hossein Chinaei

Abstract

Tool-calling LLM agents can read private data, invoke external services, and trigger real-world actions, creating a security problem at the point of tool execution. We identify a denial-feedback leakage pattern, which we term causality laundering, in which an adversary probes a protected action, learns from the denial outcome, and exfiltrates the inferred information through a later seemingly benign tool call. This attack is not captured by flat provenance tracking alone because the leaked information arises from causal influence of the denied action, not direct data flow. We present the Agentic Reference Monitor (ARM), a runtime enforcement layer that mediates every tool invocation by consulting a provenance graph over tool calls, returned data, field-level provenance, and denied actions. ARM propagates trust through an integrity lattice and augments the graph with counterfactual edges from denied-action nodes, enabling enforcement over both transitive data dependencies and denial-induced causal influence. In a controlled evaluation on three representative attack scenarios, ARM blocks causality laundering, transitive taint propagation, and mixed-provenance field misuse that a flat provenance baseline misses, while adding sub-millisecond policy evaluation overhead. These results suggest that denial-aware causal provenance is a useful abstraction for securing tool-calling agent systems.
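The abstract's core mechanism, a provenance graph whose denial-induced counterfactual edges are walked at policy time, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the names `ProvGraph`, `min_reachable_trust`, and `allow`, and the two-point lattice, are all assumptions made for exposition.

```python
# Hypothetical sketch of a denial-aware provenance check. A tool call is
# allowed only if every causal ancestor -- including denied-action nodes
# linked by counterfactual edges -- meets the trust threshold.
from collections import defaultdict

TRUSTED, UNTRUSTED = 2, 0  # toy two-point integrity lattice

class ProvGraph:
    def __init__(self):
        self.trust = {}                  # node -> trust level
        self.parents = defaultdict(set)  # node -> causal predecessors

    def add_node(self, node, trust):
        self.trust[node] = trust

    def add_edge(self, src, dst):
        # Covers both data-flow edges and counterfactual edges
        # originating at denied-action nodes.
        self.parents[dst].add(src)

    def min_reachable_trust(self, node):
        # Minimum trust over the node and all of its causal ancestors.
        seen, stack, lo = set(), [node], self.trust[node]
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            lo = min(lo, self.trust[n])
            stack.extend(self.parents[n])
        return lo

def allow(graph, call_node, theta=TRUSTED):
    # Deny when any ancestor -- e.g. a denied probe whose outcome the
    # agent observed -- falls below the trust threshold theta.
    return graph.min_reachable_trust(call_node) >= theta

g = ProvGraph()
g.add_node("denied_probe", UNTRUSTED)      # denial outcome seen by the agent
g.add_node("benign_call", TRUSTED)
g.add_edge("denied_probe", "benign_call")  # counterfactual influence edge
print(allow(g, "benign_call"))  # False: the "benign" call is blocked
```

A flat provenance baseline, which tracks only direct data flow, would see no tainted input to `benign_call`; the counterfactual edge is what lets the check reach back to the denied probe.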

Paper Structure

This paper contains 74 sections, 3 theorems, 5 equations, 1 figure, 2 tables, and 1 algorithm.

Key Result

Theorem 1

In an ARM-protected system, no causal path in the provenance graph $G$ leads from a node with trust level below the threshold $\theta$ to an Allow verdict on a tool call without traversing at least one enforcement check.
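Theorem 1 asserts a node-cut property: enforcement checks separate low-trust nodes from Allow verdicts in $G$. The invariant can be tested mechanically by searching for an unmediated path; this is a hypothetical verification sketch, with `violates_mediated_integrity` and its parameters invented for illustration.

```python
# Hypothetical check of the Theorem 1 invariant: search for a causal path
# from any node with trust below theta to the Allow verdict that avoids
# every enforcement-check node. Finding one is a violation.
def violates_mediated_integrity(edges, trust, checks, allow_node, theta):
    adj = {}
    for s, d in edges:
        adj.setdefault(s, []).append(d)
    for start, t in trust.items():
        if t >= theta:
            continue  # only low-trust sources matter
        stack, seen = [start], set()
        while stack:
            n = stack.pop()
            if n in seen or n in checks:
                continue  # refuse to pass through enforcement checks
            seen.add(n)
            if n == allow_node:
                return True  # unmediated low-trust -> Allow path found
            stack.extend(adj.get(n, []))
    return False

# Mediated: the only path runs through an enforcement check.
ok = violates_mediated_integrity(
    [("probe", "check"), ("check", "allow")],
    {"probe": 0}, {"check"}, "allow", theta=2)
# Unmediated: direct edge from the low-trust probe to Allow.
bad = violates_mediated_integrity(
    [("probe", "allow")], {"probe": 0}, set(), "allow", theta=2)
print(ok, bad)  # False True
```

The depth-first search simply treats enforcement checks as deleted vertices; the theorem holds exactly when the Allow node is unreachable in the pruned graph from every low-trust source.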

Figures (1)

  • Figure 1: The ARM layered policy pipeline. Each layer can independently deny a tool call.

Theorems & Definitions (11)

  • Definition 1: Integrity Lattice
  • Definition 2: Causality Laundering
  • Definition 3: Provenance Graph
  • Definition 4: Minimum Reachable Trust
  • Definition 5: Field-Level Trust Override
  • Theorem 1: Mediated Integrity
  • Proof
  • Corollary 1: Defense in Depth
  • Proof
  • Lemma 1: Monotonic Taint Propagation
  • ...and 1 more