Generalizability of experimental studies
Federico Matteucci, Vadim Arzamasov, Jose Cribeiro-Ramallo, Marco Heyden, Konstantin Ntounas, Klemens Böhm
TL;DR
This work addresses the lack of a principled measure for the generalizability of ML experimental studies. It introduces a probabilistic formalization in which experimental results are modeled as samples from a true distribution, and generalizability is quantified via distance-based comparisons of replicated study realizations, using ranking representations and the Maximum Mean Discrepancy (MMD). The authors instantiate this framework with ranking kernels (Borda, Jaccard, Mallows) and provide a practical method to estimate the necessary study size n*, accompanied by the genexpy Python package. They demonstrate the approach through case studies on categorical encoders and BIG-bench LLM tasks, illustrating how many preliminary experiments are needed to achieve generalizable conclusions and offering actionable guidance for designing robust experiments. The work aims to improve external validity and reproducibility by enabling principled planning and evaluation of generalizability in ML research.
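The core quantitative step described above, comparing replicated study realizations via the Maximum Mean Discrepancy over ranking representations, can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the genexpy API: `to_ranks`, `mallows_kernel`, and `mmd2` are hypothetical names, and a Gaussian kernel on Borda-style rank vectors stands in for the paper's Mallows/Borda/Jaccard kernels.

```python
import numpy as np

def to_ranks(scores: np.ndarray) -> np.ndarray:
    """Convert a (runs x methods) score matrix into per-run rank
    vectors (0 = best), a simple Borda-style representation."""
    order = np.argsort(-scores, axis=1)          # best method first
    ranks = np.empty_like(order)
    rows = np.arange(scores.shape[0])[:, None]
    ranks[rows, order] = np.arange(scores.shape[1])
    return ranks.astype(float)

def mallows_kernel(R: np.ndarray, S: np.ndarray, lam: float = 0.1) -> np.ndarray:
    """Kernel matrix k(r, s) = exp(-lam * ||r - s||^2) on rank vectors,
    used here as a stand-in for a Mallows-type ranking kernel."""
    sq_dists = ((R[:, None, :] - S[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-lam * sq_dists)

def mmd2(R: np.ndarray, S: np.ndarray, lam: float = 0.1) -> float:
    """Biased empirical MMD^2 between two samples of rankings;
    values near 0 suggest the two study realizations agree."""
    return (mallows_kernel(R, R, lam).mean()
            + mallows_kernel(S, S, lam).mean()
            - 2.0 * mallows_kernel(R, S, lam).mean())
```

In this reading, one would split a study's experiments into replicated realizations, convert each to rankings with `to_ranks`, and track how `mmd2` between realizations shrinks as the number of experiments grows, which is the intuition behind estimating the required study size n*.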
Abstract
Experimental studies are a cornerstone of Machine Learning (ML) research. A common and often implicit assumption is that a study's results will generalize beyond the study itself, e.g., to new data. That is, repeating the same study under different conditions should yield similar results. Existing frameworks for measuring generalizability, borrowed from the causal inference literature, cannot capture the complexity of the results and the goals of an ML study. The problem of measuring generalizability in the more general ML setting thus remains open, in part due to the lack of a mathematical formalization of experimental studies. In this paper, we propose such a formalization, use it to develop a framework for quantifying generalizability, and propose an instantiation based on rankings and the Maximum Mean Discrepancy. We show how our framework offers insights into the number of experiments necessary for a generalizable study, and how experimenters can benefit from it. Finally, we release the genexpy Python package, which enables an effortless evaluation of the generalizability of other experimental studies.
