Near-Miss: Latent Policy Failure Detection in Agentic Workflows

Ella Rabinovich, David Boaz, Naama Zwerdling, Ateret Anaby-Tavor

Abstract

Agentic systems for business process automation often require compliance with policies governing conditional updates to the system state. Evaluation of policy adherence in LLM-based agentic workflows is typically performed by comparing the final system state against a predefined ground truth. While this approach detects explicit policy violations, it may overlook a more subtle class of issues in which agents bypass required policy checks, yet reach a correct outcome due to favorable circumstances. We refer to such cases as $\textit{near-misses}$ or $\textit{latent failures}$. In this work, we introduce a novel metric for detecting latent policy failures in agent conversation traces. Building on the ToolGuard framework, which converts natural-language policies into executable guard code, our method analyzes agent trajectories to determine whether the agent's tool-calling decisions were sufficiently informed. We evaluate our approach on the $\tau^2$-verified Airlines benchmark across several contemporary open and proprietary LLMs acting as agents. Our results show that latent failures occur in 8-17% of trajectories involving mutating tool calls, even when the final outcome matches the expected ground-truth state. These findings reveal a blind spot in current evaluation methodologies and highlight the need for metrics that assess not only final outcomes but also the decision process leading to them.

Paper Structure

This paper contains 22 sections, 4 figures, and 2 tables.

Figures (4)

  • Figure 1: Canceling a reservation upon a customer request. Assuming accurate information, this workflow leads to the same outcome either with reservation details check (the tool call encircled by the dashed line) or without it (following the bypassing dashed path).
  • Figure 2: Schematic ReAct agentic flow using ToolGuard -- cancellation eligibility is verified prior to cancel_reservation() invocation. Contrary to this approach, if policy adherence is left to the agent's best effort, three possible outcomes exist: (1) the agent explicitly validates the policy by fetching and inspecting reservation details (the desired path), (2) the agent bypasses policy validation, which results in a policy violation (captured by the benchmark, since the resulting system state differs from the ground-truth state), and (3) the agent bypasses policy validation, yet the flow results in a valid outcome since the policy is not violated (a near-miss, the subject of this study).
  • Figure 3: Near-miss detection in a completed task trajectory: mutating tool call with arguments args (MTC(args)) detected $\rightarrow$ load and simulate MTC's guard code $\rightarrow$ if a read-only tool call (RO) is found, we search the trajectory history for precisely this, or any alternative, read-only tool invocation that obtains the same required information $\rightarrow$ if none is found, we flag a latent failure to follow the policies.
  • Figure 4: On the left: latent failure distribution by mutating tool: update_reservation_flights() is the tool invoked most frequently without ensuring that the system state complies with policies, such as flight status and seat availability. Another tool --- update_reservation_passengers() --- is missing from the chart since near-misses were not observed for this function. On the right: the distribution of read-only tools causing near-misses: get_flight_status() is the tool most frequently bypassed by agents; indeed, our manual inspection reveals that flights are often updated without verifying that their status is "available" as required by policies. Consistent with Table \ref{tbl:results-summary}, a slightly higher rate of latent failures is observed in closed models (blue) than in open models (orange).
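The detection procedure in Figure 3 can be illustrated with a minimal sketch. All names below (ToolCall, MUTATING_GUARDS, is_near_miss) are hypothetical stand-ins, not the authors' actual API; in particular, the MUTATING_GUARDS table is a simplified substitute for simulating ToolGuard's generated guard code, which would determine the required read-only calls dynamically.

```python
# Hypothetical sketch of near-miss detection over a completed trajectory.
# Assumption: each guard's information needs are summarized as a list of
# (read-only tool name, argument keys that must match the mutating call).
from dataclasses import dataclass


@dataclass
class ToolCall:
    name: str
    args: dict
    read_only: bool


# Illustrative stand-in for simulating a mutating tool's guard code.
MUTATING_GUARDS = {
    "cancel_reservation": [("get_reservation_details", ("reservation_id",))],
}


def is_near_miss(trajectory: list, mtc: ToolCall) -> bool:
    """True if the mutating tool call `mtc` was issued without any earlier
    read-only call supplying the information its guard requires."""
    required = MUTATING_GUARDS.get(mtc.name, [])
    history = trajectory[: trajectory.index(mtc)]  # calls preceding the MTC
    for ro_name, arg_keys in required:
        satisfied = any(
            call.read_only
            and call.name == ro_name
            and all(call.args.get(k) == mtc.args.get(k) for k in arg_keys)
            for call in history
        )
        if not satisfied:
            return True  # guard-required read never happened: latent failure
    return False
```

For example, a trajectory containing only cancel_reservation() would be flagged, while one that first calls get_reservation_details() with the same reservation_id would not.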