Rethinking Failure Attribution in Multi-Agent Systems: A Multi-Perspective Benchmark and Evaluation

Yeonjun In, Mehrab Tanjim, Jayakumar Subramanian, Sungchul Kim, Uttaran Bhattacharya, Wonjoong Kim, Sangwu Park, Somdeb Sarkhel, Chanyoung Park

Abstract

Failure attribution is essential for diagnosing and improving multi-agent systems (MAS), yet existing benchmarks and methods largely assume a single deterministic root cause for each failure. In practice, MAS failures often admit multiple plausible attributions due to complex inter-agent dependencies and ambiguous execution trajectories. We revisit MAS failure attribution from a multi-perspective standpoint and propose multi-perspective failure attribution, a practical paradigm that explicitly accounts for attribution ambiguity. To support this setting, we introduce MP-Bench, the first benchmark designed for multi-perspective failure attribution in MAS, along with a new evaluation protocol tailored to this paradigm. Through extensive experiments, we find that prior conclusions suggesting LLMs struggle with failure attribution are largely driven by limitations in existing benchmark designs. Our results highlight the necessity of multi-perspective benchmarks and evaluation protocols for realistic and reliable MAS debugging.

Paper Structure

This paper contains 35 sections, 1 equation, 8 figures, and 7 tables.

Figures (8)

  • Figure 1: Motivating examples illustrating the multi-perspective nature of MAS failure attribution. (a) presents a simplified execution log of a MAS. (b) presents an example of the deterministic failure attribution that existing approaches assume. (c) presents an example of multi-perspective failure attribution.
  • Figure 2: Overall framework of the (a) annotation process and (b) evaluation protocol of MP-Bench.
  • Figure 3: Analysis of MP-Bench highlighting the multi-perspective nature of MAS failure attribution. (a) Distribution of steps grouped by annotator consensus on failure annotations. (b) Inter-annotator disagreement rates for failure annotations.
  • Figure 4: Disagreement analysis of the LLM-based failure attribution system. (a) Pairwise disagreement rates across different samplings of GPT-5.1. (b) Pairwise disagreement rates across different LLMs.
  • Figure 5: Failure attribution performance across varying numbers of LLM runs ($N$) on MP-Bench. OSS denotes GPT-OSS-120B, and Sonnet denotes the Claude-Sonnet-4.5 model.
  • ...and 3 more figures
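
The pairwise disagreement rates referenced in Figure 4 can be illustrated with a small sketch. The helper below is hypothetical (the paper does not specify its exact metric): it assumes each run assigns one attributed agent per failure case, and averages, over all run pairs, the fraction of cases on which two runs disagree.

```python
# Illustrative sketch of a pairwise disagreement rate, as in Figure 4.
# Function name, data layout, and agent labels are assumptions for illustration.
from itertools import combinations

def pairwise_disagreement(runs):
    """Average, over all pairs of runs, the fraction of failure cases
    on which the two runs attribute the failure to different agents."""
    rates = []
    for a, b in combinations(runs, 2):
        diffs = sum(1 for x, y in zip(a, b) if x != y)
        rates.append(diffs / len(a))
    return sum(rates) / len(rates)

# Three samplings of the same model over four failure cases:
runs = [
    ["agent_2", "agent_1", "agent_3", "agent_2"],
    ["agent_2", "agent_1", "agent_1", "agent_2"],
    ["agent_2", "agent_3", "agent_3", "agent_2"],
]
print(pairwise_disagreement(runs))  # -> 0.3333... (1/3 of case pairs disagree)
```

A higher value indicates less stable attributions across samplings or across models, which is the ambiguity the multi-perspective protocol is designed to capture.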