
Agent psychometrics: Task-level performance prediction in agentic coding benchmarks

Chris Ge, Daria Kryvosheieva, Daniel Fried, Uzay Girit, Kaivalya Hariharan

Abstract

As the focus in LLM-based coding shifts from static single-step code generation to multi-step agentic interaction with tools and environments, understanding which tasks will challenge agents and why becomes increasingly difficult. This is compounded by current practice: agent performance is typically measured by aggregate pass rates on benchmarks, but single-number metrics obscure the diversity of tasks within a benchmark. We present a framework for predicting success or failure on individual tasks tailored to the agentic coding regime. Our approach augments Item Response Theory (IRT) with rich features extracted from tasks, including issue statements, repository contexts, solutions, and test cases, and introduces a novel decomposition of agent ability into LLM and scaffold ability components. This parameterization enables us to aggregate evaluation data across heterogeneous leaderboards and accurately predict task-level performance for unseen benchmarks, as well as unseen LLM-scaffold combinations. Our methods have practical utility for benchmark designers, who can better calibrate the difficulty of their new tasks without running computationally expensive agent evaluations.
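
The abstract and Figure 1 describe the prediction model only at a high level. As a rough sketch (assuming, for illustration, a Rasch-style 1PL logistic link and an additive ability decomposition, neither of which is specified on this page, and with $\theta_{\text{LLM}}$, $\theta_{\text{scaffold}}$, and $b_{\text{task}}$ as illustrative notation), the success prediction would take the form

$$P(\text{success} \mid \text{agent}, \text{task}) \;=\; \sigma\!\left(\theta_{\text{LLM}} + \theta_{\text{scaffold}} - b_{\text{task}}\right),$$

where $\sigma(x) = 1/(1 + e^{-x})$ is the logistic function, $\theta_{\text{LLM}}$ and $\theta_{\text{scaffold}}$ are the decomposed ability components of the agent, and $b_{\text{task}}$ is the task difficulty estimated from the issue statement, repository context, solution, and test-case features.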

Paper Structure

This paper contains 42 sections, 3 equations, 4 figures, and 14 tables.

Figures (4)

  • Figure 1: Agent and task features predicting success probability. We illustrate the feature sources from which we derive estimates of an agent's ability score and a task's difficulty score. Then, using the estimated agent ability and task difficulty, we apply the logistic model from IRT (Baker, 2001) to predict the probability that the agent succeeds on the task.
  • Figure 2: Validation of decomposition. Strong correlation (Pearson $r=0.974$) between agent abilities learned on a fixed scaffold (Terminus 2) versus LLM abilities isolated via our decomposition method.
  • Figure 3: Choosing Effective Subsets of a Benchmark for Evaluation Via Adaptive Task Selection. IRT (Predicted) uses the predicted IRT difficulty scores from a multi-benchmark model trained without SWE-bench Pro response data. IRT (Oracle) uses IRT difficulty scores, unrealistically calibrated with full response data. Random simply selects tasks at random. (A simple selection sketch follows this list.)
  • Figure 4: Task difficulty histograms. SWE-bench Pro is harder on average than Verified, GSO is the hardest, and Terminal-Bench 2.0 is highly heterogeneous.
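
Figure 3 compares task-selection strategies only at a high level. The sketch below shows one plausible way difficulty-based selection could work under the same 1PL assumption as the formula above; the function `select_informative_subset`, the Fisher-information criterion, and the provisional ability estimate `theta_hat` are illustrative assumptions rather than the paper's actual procedure.

```python
import numpy as np

def select_informative_subset(task_difficulties, k, theta_hat=0.0):
    """Pick the k tasks most informative about ability near theta_hat.

    Illustrative only: under a 1PL logistic model, a task with difficulty b
    contributes Fisher information p * (1 - p) at ability theta_hat, where
    p = sigma(theta_hat - b). Tasks whose difficulty sits closest to the
    current ability estimate are therefore selected first. The paper's
    actual adaptive selection procedure may differ.
    """
    b = np.asarray(task_difficulties, dtype=float)
    p = 1.0 / (1.0 + np.exp(-(theta_hat - b)))  # predicted success probability
    information = p * (1.0 - p)                 # 1PL Fisher information per task
    return np.argsort(-information)[:k]         # indices of the k most informative tasks

# Example: choose a 10-task subset from predicted difficulties for a new benchmark.
rng = np.random.default_rng(0)
predicted_difficulties = rng.normal(loc=0.0, scale=1.5, size=200)
subset_indices = select_informative_subset(predicted_difficulties, k=10)
```

The intuition is that tasks whose predicted difficulty lies near the agent's ability estimate are the most discriminative, which motivates difficulty-calibrated subset selection over random sampling.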