Efficient Detection of Bad Benchmark Items with Novel Scalability Coefficients

Michael Hardy, Joshua Gilbert, Benjamin Domingue

Abstract

The validity of assessments, from large-scale AI benchmarks to human classrooms, depends on the quality of individual items, yet modern evaluation instruments often contain thousands of items with minimal psychometric vetting. We introduce a new family of nonparametric scalability coefficients based on interitem isotonic regression for efficiently detecting globally bad items (e.g., miskeyed, ambiguously worded, or construct-misaligned). The central contribution is the signed isotonic $R^2$, which measures the maximal proportion of variance in one item explainable by a monotone function of another while preserving the direction of association via Kendall's $\tau$. Aggregating these pairwise coefficients yields item-level scores that sharply separate problematic items from acceptable ones without assuming linearity or committing to a parametric item response model. We show that the signed isotonic $R^2$ is extremal among monotone predictors (it extracts the strongest possible monotone signal between any two items) and demonstrate that this optimality property translates directly into practical screening power. Across three AI benchmark datasets (HS Math, GSM8K, MMLU) and two human assessment datasets, the signed isotonic $R^2$ consistently achieves top-tier AUC for ranking bad items above good ones, outperforming or matching a comprehensive battery of classical test theory, item response theory, and dimensionality-based diagnostics. Crucially, the method remains robust under the small-n/large-p conditions typical of AI evaluation, requires only bivariate monotone fits computable in seconds, and handles mixed item types (binary, ordinal, continuous) without modification. It is a lightweight, model-agnostic filter that can materially reduce the reviewer effort needed to find flawed items in modern large-scale evaluation regimes.

Paper Structure

This paper contains 64 sections, 1 theorem, 11 equations, 2 figures, 3 tables.

Key Result

Proposition 3.1

Fix item pair $(i,j)$ and consider predictors of $Y_j$ of the form $f(Y_i)$ where $f$ is non-decreasing. Let $\hat{f}$ be the isotonic regression solution as defined in Eq. eq:isotonic. Then for any non-decreasing $g$, $\sum_s \big(Y_{sj} - \hat{f}(Y_{si})\big)^2 \le \sum_s \big(Y_{sj} - g(Y_{si})\big)^2$, and therefore $R^2_{i\rightarrow j}$ computed from $\hat{f}$ is the largest achievable $R^2$ among monotone predictors.
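To make the proposition concrete, the following is a minimal sketch of how a signed isotonic $R^2$ could be computed for one item pair: fit the least-squares monotone predictor of $Y_j$ from $Y_i$ via the pool-adjacent-violators algorithm (PAVA), take its $R^2$, and attach the sign of Kendall's $\tau$ so that anti-monotone associations come out negative. The function names and the exact handling of ties are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def _pava(values, weights):
    """Pool Adjacent Violators: weighted least-squares non-decreasing fit."""
    blocks = []  # each block: [pooled mean, pooled weight, number of points]
    for v, w in zip(values, weights):
        blocks.append([v, w, 1])
        # merge adjacent blocks while the monotonicity constraint is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, w2, c2 = blocks.pop()
            v1, w1, c1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(v1 * w1 + v2 * w2) / wt, wt, c1 + c2])
    out = []
    for v, _, c in blocks:
        out.extend([v] * c)
    return np.array(out)

def signed_isotonic_r2(y_i, y_j):
    """Illustrative signed isotonic R^2 of item j regressed on item i.

    R^2 of the best monotone predictor f(y_i) of y_j, signed by
    Kendall's tau so reversed associations are negative.
    """
    y_i = np.asarray(y_i, dtype=float)
    y_j = np.asarray(y_j, dtype=float)
    # sign of association via Kendall's tau (O(n^2) pair count; fine for a sketch)
    diff_i = np.sign(y_i[:, None] - y_i[None, :])
    diff_j = np.sign(y_j[:, None] - y_j[None, :])
    tau_sign = np.sign(np.sum(diff_i * diff_j))
    # fit in the direction of association: flip the predictor if tau < 0
    x = y_i if tau_sign >= 0 else -y_i
    # equal predictor values must share one fitted value: pool before PAVA
    ux, inverse = np.unique(x, return_inverse=True)
    group_w = np.bincount(inverse).astype(float)
    group_mean = np.bincount(inverse, weights=y_j) / group_w
    y_hat = _pava(group_mean, group_w)[inverse]
    ss_res = np.sum((y_j - y_hat) ** 2)
    ss_tot = np.sum((y_j - y_j.mean()) ** 2)
    return tau_sign * (1.0 - ss_res / ss_tot)
```

Because the constant function (all blocks pooled to the grand mean) is itself a feasible monotone fit, the unsigned $R^2$ is always in $[0,1]$, so the signed coefficient lies in $[-1,1]$; a perfectly monotone pair attains $+1$ and a perfectly anti-monotone pair attains $-1$.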

Figures (2)

  • Figure 1: Examples of Differences in Item Detection Efficiencies by Technique: bad-item detection as ordered by each item-fit metric. iso = $M_{iso}$; tau = $M_\tau$; adj_depth = Isolation Forest tree depth (anomaly detection); z_outfit_3 = standardized absolute 3PL outfit statistic; a1_3 and a1_2 = discrimination parameters for the 3PL and 2PL models, respectively; Hi = $H_i$; Zi = $Z_i$; g_15 = loading on the general factor of a 15-factor estimation of McDonald's $\omega$; cmean = mean inter-item tetrachoric correlation; cneg = proportion of inter-item tetrachoric correlations < 0; alpha_drop = benchmark reliability (Cronbach's $\alpha$) with the item removed
  • Figure 2: Instrument Composition Permutation Bootstrap Results ($n \times p$)

Theorems & Definitions (1)

  • Proposition 3.1: Maximal monotone explained variance