BenchScope: How Many Independent Signals Does Your Benchmark Provide?

Tommy Sha, Stella Zhao

Abstract

AI evaluation suites often report many scores without checking whether those scores carry independent information. We introduce Effective Dimensionality (ED), the participation ratio of a centered benchmark-score spectrum, as a fast, population-conditional upper-bound diagnostic of measurement breadth. Applied at per-instance granularity to 22 benchmarks across 8 domains and more than 8,400 model evaluations, ED reveals substantial redundancy: the six-score Open LLM Leaderboard behaves like roughly two effective measurement axes (ED = 1.7), BBH and MMLU-Pro are near-interchangeable (rho = 0.96, stable across seven subpopulations), and measurement breadth varies more than 20x across current benchmarks. We show that relative ED rankings are stable under matched-dimension controls and that ED can flag redundant suite components, monitor performance-conditional compression, and guide benchmark maintenance. Because binary spectra overestimate absolute latent dimensionality, we interpret ED as a screening statistic rather than a literal factor count and complement it with null, reliability, and saturation analyses. We provide a 22-benchmark reference atlas and a four-step diagnostic workflow that benchmark maintainers can run with a score matrix and a few lines of code.
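The ED statistic itself takes only a few lines to compute. The sketch below is a minimal NumPy reading of the definition in the abstract (task-center a models-by-instances score matrix, take the singular-value spectrum, form the participation ratio); the function name and toy matrix are illustrative assumptions, not the paper's released code.

```python
import numpy as np

def effective_dimensionality(scores: np.ndarray) -> float:
    """Participation ratio of the task-centered score spectrum.

    `scores` is a (models x instances) matrix of per-instance results,
    e.g. binary pass/fail. Each instance column is centered, the spectrum
    comes from an SVD, and ED = (sum of eigenvalues)^2 / sum of squared eigenvalues.
    """
    centered = scores - scores.mean(axis=0, keepdims=True)    # task-center each instance column
    eigvals = np.linalg.svd(centered, compute_uv=False) ** 2  # covariance spectrum, up to scaling
    return float(eigvals.sum() ** 2 / (eigvals ** 2).sum())

# Toy example: 200 simulated models x 500 instances of unstructured pass/fail.
rng = np.random.default_rng(0)
toy = (rng.random((200, 500)) < 0.5).astype(float)
print(f"ED of unstructured toy matrix: {effective_dimensionality(toy):.1f}")
```

The computation is closed-form: no thresholds, permutations, or iterative fitting are needed, which is what makes ED cheap enough to run as a routine screening statistic.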

Paper Structure

This paper contains 82 sections, 1 theorem, 4 equations, 12 figures, 6 tables, and 1 algorithm.

Key Result

Theorem 1

Let $s_1, s_2$ be standardized benchmark scores with Pearson correlation $\rho$. For any composite $c = w_1 s_1 + w_2 s_2$ with $w_1, w_2 > 0$, $\min\bigl(\operatorname{corr}(c, s_1), \operatorname{corr}(c, s_2)\bigr) \le \sqrt{(1+\rho)/2}$, and the maximum is attained at equal weights $w_1 = w_2$.
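As a quick numerical sanity check of this ceiling, a minimal NumPy sketch under the min-correlation reading of the bound (the closed-form correlation expressions and the choice $\rho = 0.5$ are illustrative, not taken from the paper's proof):

```python
import numpy as np

def component_corrs(w1, w2, rho):
    """Correlations of c = w1*s1 + w2*s2 with each standardized component."""
    sigma_c = np.sqrt(w1**2 + w2**2 + 2 * w1 * w2 * rho)
    return (w1 + w2 * rho) / sigma_c, (w2 + w1 * rho) / sigma_c

rho = 0.5                               # illustrative value, not from the paper
ceiling = np.sqrt((1 + rho) / 2)

# Sweep positive weight splits: the smaller of the two correlations never
# exceeds the ceiling, and reaches it only at equal weights.
for w1 in np.linspace(0.05, 0.95, 19):
    assert min(component_corrs(w1, 1 - w1, rho)) <= ceiling + 1e-12

print(f"ceiling = {ceiling:.4f}, "
      f"min corr at equal weights = {min(component_corrs(0.5, 0.5, rho)):.4f}")
```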

Figures (12)

  • Figure 1: Redundancy structure in the Open LLM Leaderboard v2. (a) The pairwise correlation matrix (full 4,576-model population) shows that the six-score suite behaves like roughly two effective measurement axes ($\text{ED} = 1.7$): BBH and MMLU-Pro are near-duplicates ($\rho = 0.96$) while other pairs are only weakly correlated (e.g., IFEval--MuSR: $\rho = 0.32$). (b) A subset-specific pattern ($n = 188$ only): within a curated subset, MATH and MuSR are negatively correlated ($\rho = -0.64$); in the full population, the correlation is positive ($\rho = 0.50$; see Figure \ref{fig:tradeoffs}).
  • Figure 2: From matrix to metric: ED computation pipeline. A binary pass/fail matrix is task-centered, spectrally decomposed via SVD, and summarized by the participation ratio, a single closed-form scalar requiring no thresholds, no permutations, and no iterations.
  • Figure 3: Taxonomy $\neq$ measurement breadth. All three examples use per-instance ED as the common metric. BFCL's 20 categories yield per-instance ED = 1.5; MMLU-Pro's 14 disciplines yield category-level ED = 1.1; BigCodeBench's single label yields ED = 29 (raw), of which ${\sim}8$ PCs exceed the permutation null (see the permutation-null sketch following this figure list). Category counts can overstate or understate empirical breadth by an order of magnitude.
  • Figure 4: ED across 22 benchmarks, sorted by descending ED. BigCodeBench ($\text{ED}=29$) spans more than $20\times$ the raw dimensionality of GAIA and EvalPlus ($\text{ED}=1.2$). The variation is not explained by benchmark size alone; task-set design and matrix shape both contribute (Section \ref{sec:what-drives-ed}).
  • Figure 5: Pairwise correlations in the Open LLM Leaderboard v2. (a) MATH vs. MuSR scatter for the 188-model subset, colored by model size: the negative correlation ($\rho = -0.64$) is visible within size bands, but in the full 4,576-model population the correlation is positive ($\rho = +0.50$). (b) All 15 benchmark pairs sorted by Spearman $\rho$ with bootstrap 95% CIs. Six pairs are significantly negative (blue); two are redundant (red, $\rho > 0.5$). These negative correlations are specific to the 188-model subset and do not generalize to the full population (Section \ref{subsec:conditional-correlations}).
  • ...and 7 more figures
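The permutation-null comparison mentioned for Figure 3 can be screened with a parallel-analysis-style check. The sketch below is one plausible implementation, assuming each instance column is permuted independently across models; the paper's exact null construction and cutoffs may differ.

```python
import numpy as np

def components_above_null(scores, n_perm=100, quantile=0.95, seed=0):
    """Count leading PCs whose eigenvalues exceed a column-permutation null.

    Each permutation shuffles every instance column independently across
    models, preserving per-instance difficulty while destroying any
    cross-instance correlation structure.
    """
    rng = np.random.default_rng(seed)
    centered = scores - scores.mean(axis=0, keepdims=True)
    observed = np.linalg.svd(centered, compute_uv=False) ** 2   # descending order

    null = np.empty((n_perm, observed.size))
    for p in range(n_perm):
        permuted = np.column_stack([rng.permutation(col) for col in scores.T])
        permuted = permuted - permuted.mean(axis=0, keepdims=True)
        null[p] = np.linalg.svd(permuted, compute_uv=False) ** 2

    threshold = np.quantile(null, quantile, axis=0)
    above = observed > threshold
    return int(above.cumprod().sum())   # length of the leading run above the null

# Pure-noise toy matrix: expect ~0 components to survive the null.
rng = np.random.default_rng(1)
toy = (rng.random((100, 300)) < 0.5).astype(float)
print(components_above_null(toy))
```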

Theorems & Definitions (1)

  • Theorem 1: Composite correlation ceiling