
ClawArena: Benchmarking AI Agents in Evolving Information Environments

Haonian Ji, Kaiwen Xiong, Siwei Han, Peng Xia, Shi Qiu, Yiyang Zhou, Jiaqi Liu, Jinlong Li, Bingzhou Li, Zeyu Zheng, Cihang Xie, Huaxiu Yao

Abstract

AI agents deployed as persistent assistants must maintain correct beliefs as their information environment evolves. In practice, evidence is scattered across heterogeneous sources that often contradict one another, new information can invalidate earlier conclusions, and user preferences surface through corrections rather than explicit instructions. Existing benchmarks largely assume static, single-authority settings and do not evaluate whether agents can keep up with this complexity. We introduce ClawArena, a benchmark for evaluating AI agents in evolving information environments. Each scenario maintains a complete hidden ground truth while exposing the agent only to noisy, partial, and sometimes contradictory traces across multi-channel sessions, workspace files, and staged updates. Evaluation is organized around three coupled challenges: multi-source conflict reasoning, dynamic belief revision, and implicit personalization, whose interactions yield a 14-category question taxonomy. Two question formats, multi-choice (set-selection) and shell-based executable checks, test both reasoning and workspace grounding. The current release contains 64 scenarios across 8 professional domains, totaling 1,879 evaluation rounds and 365 dynamic updates. Experiments on five agent frameworks and five language models show that both model capability (a 15.4% range) and framework design (a 9.2% range) substantially affect performance, that self-evolving skill frameworks can partially close model-capability gaps, and that belief revision difficulty is determined by update design strategy rather than the mere presence of updates. Code is available at https://github.com/aiming-lab/ClawArena.
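One concrete way to read the fractional per-question scores reported later (e.g., 0.833 on a six-option question in Figure 4) is as per-option agreement between the agent's answer set and the hidden gold set. The sketch below implements that reading; the scoring rule is an assumption for illustration, not the benchmark's documented metric, so consult the repository for the authoritative scorer.

```python
# Hypothetical per-option scorer for the set-selection multi-choice format.
# Each option is judged independently: the agent earns credit for every
# option it correctly includes in, or correctly excludes from, its answer.
# NOTE: an illustrative sketch, not ClawArena's official metric.

def per_option_score(predicted: set[str], gold: set[str], options: list[str]) -> float:
    """Fraction of options on which the prediction agrees with the gold set."""
    agree = sum((opt in predicted) == (opt in gold) for opt in options)
    return agree / len(options)

# Mis-judging exactly one of six options yields 5/6 = 0.833, the score
# pattern seen in Figure 4's per-option case study.
options = ["A", "B", "C", "D", "E", "F"]
print(per_option_score({"A", "B", "C"}, {"A", "B"}, options))  # 0.8333...
```

Under such a rule, two configurations can share an aggregate score while failing on disjoint options, which is exactly the failure-mode masking that Figure 4 highlights.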


Paper Structure

This paper contains 26 sections, 7 figures, and 5 tables.

Figures (7)

  • Figure 1: Overview of ClawArena across 8 professional domains. Each scenario presents multi-channel session histories, workspace files, and evaluation questions requiring multi-source conflict reasoning, dynamic belief revision, and implicit personalization. The center logo reflects the benchmark's adversarial spirit: agents must "claw" through conflicting evidence to reach the ground truth.
  • Figure 2: Dataset composition of ClawArena. The inner ring shows 8 professional domains (64 scenarios, 1,879 rounds total); the outer ring breaks each domain into question types: multi-choice + executable checks (exec_check), Dynamic (multi-choice with updates), and Static (multi-choice only, no updates).
  • Figure 3: ClawArena construction pipeline. Real-world distributions and character profiles feed a three-stage bootstrap, producing 64 scenarios organized into six layers with three validation passes.
  • Figure 4: Per-option case study on two representative questions from ClawArena. Case 1 (MS-R): no configuration achieves a perfect score; the two highest-scoring configs (0.833) fail on structurally opposite options, revealing that similar aggregate scores can mask qualitatively different failure modes. Case 2 (DU-R): all three GPT-5.1 non-Claude-Code frameworks produce the identical wrong answer {A,B,C,D,F}, implicating a model-level narrative bias that Claude Code's quoting discipline corrects.
  • Figure 5: Case 3 (MS+DU): Self-diagnostic accuracy varies sharply across configurations after an update reveals contamination-rate discrepancies. Case 4 (P-R): implicit preference compliance audit; all configurations fail to detect an over-sensitivity threshold drift, and overt discrepancy (D) is universally undetected.
  • ...and 2 more figures
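To make the second question format concrete, the abstract's shell-based executable checks (the exec_check category in Figure 2) can be pictured as a command run inside the agent's workspace after the episode, with stdout compared against an expected value. The harness, paths, command, and expected answer below are all hypothetical, meant only to illustrate the mechanism; ClawArena's real checks are defined in the repository.

```python
# Hypothetical harness for a shell-based executable check (exec_check).
# After the agent finishes editing its workspace, a shell command is run
# there; the check passes iff the command succeeds and stdout matches.
# Every path, command, and expected value below is made up for illustration.
import os
import subprocess

def run_exec_check(workspace: str, command: str, expected_stdout: str) -> bool:
    """Run `command` in `workspace`; pass iff it succeeds and stdout matches."""
    result = subprocess.run(
        command,
        shell=True,            # checks are expressed as shell one-liners
        cwd=workspace,
        capture_output=True,
        text=True,
        timeout=30,            # guard against hanging commands
    )
    return result.returncode == 0 and result.stdout.strip() == expected_stdout

# Stand-in for the agent's work: it should have recorded the revised
# deadline after a staged update invalidated the original one.
os.makedirs("/tmp/agent_workspace/notes", exist_ok=True)
with open("/tmp/agent_workspace/notes/deadline.txt", "w") as f:
    f.write("Deadline moved per update 3: 2025-03-14\n")

ok = run_exec_check(
    workspace="/tmp/agent_workspace",
    command="grep -oE '[0-9]{4}-[0-9]{2}-[0-9]{2}' notes/deadline.txt | head -n 1",
    expected_stdout="2025-03-14",
)
print("exec_check passed:", ok)  # True for this fixture
```

Because the check executes against the actual workspace rather than a free-text answer, it tests the workspace-grounding half of the evaluation: an agent can reason correctly and still fail if it never writes the conclusion down.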