Generalizability of experimental studies

Federico Matteucci, Vadim Arzamasov, Jose Cribeiro-Ramallo, Marco Heyden, Konstantin Ntounas, Klemens Böhm

TL;DR

This work addresses the lack of a principled measure for the generalizability of ML experimental studies. It introduces a probabilistic formalization in which experimental results are modeled as samples from a true distribution, and generalizability is quantified via distance-based comparisons of replicated study realizations, using ranking representations and the Maximum Mean Discrepancy (MMD). The authors instantiate this framework with ranking kernels (Borda, Jaccard, Mallows) and provide a practical method to estimate the necessary study size n*, accompanied by the genexpy Python package. They demonstrate the approach through case studies on categorical encoders and BIG-bench LLM tasks, illustrating how many preliminary experiments are needed to achieve generalizable conclusions and offering actionable guidance for designing robust experiments. The work aims to improve external validity and reproducibility by enabling principled planning and evaluation of generalizability in ML research.

Abstract

Experimental studies are a cornerstone of Machine Learning (ML) research. A common and often implicit assumption is that the study's results will generalize beyond the study itself, e.g., to new data. That is, repeating the same study under different conditions will likely yield similar results. Existing frameworks to measure generalizability, borrowed from the causal inference literature, cannot capture the complexity of the results and the goals of an ML study. The problem of measuring generalizability in the more general ML setting is thus still open, in part due to the lack of a mathematical formalization of experimental studies. In this paper, we propose such a formalization, use it to develop a framework to quantify generalizability, and propose an instantiation based on rankings and the Maximum Mean Discrepancy. We show how our framework offers insights into the number of experiments necessary for a generalizable study, and how experimenters can benefit from it. Finally, we release the genexpy Python package, which allows for an effortless evaluation of the generalizability of other experimental studies.
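
To make the framework concrete, below is a minimal, self-contained sketch of the core computation: two realizations of a study, each represented as a sample of rankings (one ranking of the compared methods per experiment), are compared with a Mallows-style kernel and a biased MMD estimate. This is an illustrative sketch, not the genexpy API; the kernel bandwidth `nu`, the number of alternatives, and all helper names are assumptions made for the example.

```python
import numpy as np
from itertools import combinations

def kendall_distance(r, s):
    """Number of discordant pairs between two rankings (vectors of ranks)."""
    return sum((r[i] - r[j]) * (s[i] - s[j]) < 0
               for i, j in combinations(range(len(r)), 2))

def mallows_kernel(r, s, nu=0.5):
    """Mallows-style ranking kernel: exp(-nu * Kendall distance)."""
    return np.exp(-nu * kendall_distance(r, s))

def mmd(X, Y, kernel):
    """Biased MMD estimate between two samples of rankings X and Y."""
    kxx = np.mean([[kernel(x, xp) for xp in X] for x in X])
    kyy = np.mean([[kernel(y, yp) for yp in Y] for y in Y])
    kxy = np.mean([[kernel(x, y) for y in Y] for x in X])
    return np.sqrt(max(kxx + kyy - 2.0 * kxy, 0.0))

# Two hypothetical realizations of the same study: each row ranks 4 methods
# on one experimental condition (e.g., one dataset).
rng = np.random.default_rng(0)
study_a = rng.permuted(np.tile(np.arange(4), (5, 1)), axis=1)
study_b = rng.permuted(np.tile(np.arange(4), (5, 1)), axis=1)
print("MMD between the two realizations:", mmd(study_a, study_b, mallows_kernel))
```

Generalizability, in the paper's sense, would then be estimated as the probability that the MMD between two independently drawn size-n realizations falls below a chosen similarity threshold.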

Paper Structure

This paper contains 51 sections, 10 theorems, 49 equations, 6 figures, 1 table, and 2 algorithms.

Key Result

Proposition 4.0

$\operatorname{MMD}_n \in \left[ 0, \sqrt{2\cdot\left( \kappa_\textnormal{sup}-\kappa_\textnormal{inf} \right)} \right]$, where $\kappa_\textnormal{sup} = \sup\limits_{x, y \in X} \kappa(x, y)$ and $\kappa_\textnormal{inf} = \inf\limits_{x,y \in X} \kappa(x, y)$.
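
For instance, assuming a kernel normalized so that $\kappa(x, y) \in [0, 1]$ (as with, e.g., a Jaccard-type kernel), the bound specializes to $\operatorname{MMD}_n \in [0, \sqrt{2}]$, which gives a fixed scale against which a similarity threshold can be chosen.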

Figures (6)

  • Figure 1: The $3$-Generalizability of the "checkmate-in-one" task (cf. Example \ref{ex:checkmate_study}), as the probability for two realizations to yield similar results according to some distance $d$, with $d_3 \coloneqq d(X, Y)$ if $X, Y \sim \mathbb{P}^3$. Note that the design factor $m$ is fixed, while the generalizability factor (position) varies.
  • Figure 2: Illustration of Algorithm \ref{alg:nstar_empirical} (a sketch of this estimation procedure follows the figure list). Top: $n$-generalizability is estimated from $N$ preliminary experiments, for $n \in [2, \dots, \lfloor N/2 \rfloor]$. Bottom: A power-law relation is fitted to the $\alpha^*$-quantiles of the MMD and $n$; $n^*_N$ is the prediction at $\varepsilon^*$ ($\star$).
  • Figure 3: Number of necessary experiments $n^*$ to achieve generalizability for categorical encoders, for different desired generalizability $\alpha^*$, similarity threshold $\delta^*$ (the other fixed at $0.95$ and $0.05$ resp.), and research questions $\kappa$. The variation in the plot is due to the combinations of design factors.
  • Figure 4: Number of necessary experiments $n^*$ to achieve generalizability for LLMs, for different desired generalizability $\alpha^*$, similarity threshold $\delta^*$ (the other fixed at $0.95$ and $0.05$ resp.), and research questions $\kappa$. The variation in the plot is due to the combinations of design factors.
  • Figure 5: Relative error of the prediction of $n^*$ from $N$ preliminary experiments ($n^*_N$), for uniform distributions $U_{n_a}$ of rankings of $n_a$ alternatives.
  • ...and 1 more figure
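
The estimation procedure illustrated in Figure 2 can be sketched as follows. This is a plausible reading of the figure, not the paper's exact algorithm: the resampling scheme, the number of repetitions, and the function names are assumptions, and `mmd_fn` stands for any MMD computation such as the sketch above.

```python
import numpy as np

def estimate_nstar(results, mmd_fn, alpha=0.95, eps=0.05, reps=200, seed=0):
    """Hypothetical sketch of the n* estimation idea shown in Figure 2.

    results: array of N preliminary experimental results (e.g., one ranking per row).
    mmd_fn:  callable computing the MMD between two samples of results.
    Returns the predicted study size n* at which the alpha-quantile of the MMD
    between two independent size-n realizations drops below eps.
    """
    results = np.asarray(results)
    rng = np.random.default_rng(seed)
    N = len(results)
    ns = np.arange(2, N // 2 + 1)
    quantiles = []
    for n in ns:
        mmds = []
        for _ in range(reps):
            idx = rng.permutation(N)  # split into two disjoint size-n realizations
            mmds.append(mmd_fn(results[idx[:n]], results[idx[n:2 * n]]))
        quantiles.append(np.quantile(mmds, alpha))
    # Fit a power law q(n) ~ a * n**b in log-log space (b < 0 for a decreasing trend).
    b, log_a = np.polyfit(np.log(ns), np.log(quantiles), 1)
    a = np.exp(log_a)
    # Solve a * n**b = eps for n.
    return (eps / a) ** (1.0 / b)
```

With the MMD sketch above, this could be called as `estimate_nstar(preliminary_rankings, lambda X, Y: mmd(X, Y, mallows_kernel))`, where `preliminary_rankings` is a hypothetical array of N rankings; N should be large enough that several values of n are available for the power-law fit.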

Theorems & Definitions (30)

  • Example 3.1
  • Example 3.2 (continues ex:checkmate_study)
  • Definition 3.1: Experimental results
  • Definition 4.1: Generalizability
  • Definition 4.2: Generalizable study
  • Definition 4.3: Ranking
  • Example 4.1
  • Definition 4.4: MMD
  • Proposition 4.0
  • Example 4.2 (continues ex:checkmate_study)
  • ...and 20 more