
Spark-LLM-Eval: A Distributed Framework for Statistically Rigorous Large Language Model Evaluation

Subhadip Mitra

Abstract

Evaluating large language models at scale remains a practical bottleneck for many organizations. While existing evaluation frameworks work well for thousands of examples, they struggle when datasets grow to hundreds of thousands or millions of samples. This scale is common when assessing model behavior across diverse domains or conducting comprehensive regression testing. We present Spark-LLM-Eval, a distributed evaluation framework built natively on Apache Spark. The system treats evaluation as a data-parallel problem, partitioning examples across executors and aggregating results with proper statistical accounting. Beyond raw throughput, we emphasize statistical rigor: every reported metric includes bootstrap confidence intervals, and model comparisons come with appropriate significance tests (paired t-tests, McNemar's test, or Wilcoxon signed-rank tests, depending on the metric type). The framework also addresses the cost problem inherent in LLM evaluation through content-addressable response caching backed by Delta Lake, which allows iterating on metric definitions without re-running inference. We describe the system architecture and the statistical methodology, and report benchmark results showing linear scaling with cluster size. The framework and all evaluation code are available as open source.
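To make the statistical claim concrete, a percentile-bootstrap confidence interval over per-example metric scores can be sketched in a few lines of Python. This is an illustrative sketch only; the function name `bootstrap_ci` and its defaults are hypothetical and do not reflect the framework's actual API.

```python
import numpy as np

def bootstrap_ci(scores, n_resamples=10_000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for the mean of per-example metric scores.

    Hypothetical helper for illustration; not Spark-LLM-Eval's real API.
    """
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    # Draw n_resamples datasets of the same size by sampling rows with
    # replacement, and record the mean metric score of each resample.
    idx = rng.integers(0, len(scores), size=(n_resamples, len(scores)))
    resample_means = scores[idx].mean(axis=1)
    lo, hi = np.quantile(resample_means, [alpha / 2, 1 - alpha / 2])
    return scores.mean(), (lo, hi)

# Example: a 95% CI around the mean of five per-example scores.
point, (lo, hi) = bootstrap_ci([0.81, 0.77, 0.92, 0.68, 0.85])
print(f"score = {point:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```

In the same spirit, the choice among the significance tests the abstract lists follows standard practice: paired t-tests for continuous per-example scores, McNemar's test for paired binary outcomes, and Wilcoxon signed-rank tests for paired differences where normality cannot be assumed.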

Figures (2)

  • Figure 1: System architecture. The evaluation runner orchestrates four stages: prompt preparation transforms raw data into model inputs using Jinja2 templates; distributed inference processes prompts through Pandas UDFs with per-executor rate limiting and caching; metric computation evaluates responses against references; statistical aggregation computes confidence intervals and significance tests. (A minimal sketch of the distributed inference stage appears after this list.)
  • Figure 2: Throughput scaling with executor count. Throughput increases linearly until API rate limits saturate (around 8 executors in this configuration). Error bars show standard deviation across 3 runs.
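For readers unfamiliar with the Pandas UDF pattern that Figure 1 refers to, the sketch below shows the general shape of the distributed inference stage: a Series-to-Series UDF that maps prompts to responses, with a content-addressable cache key derived from the model name and prompt. Everything here is an assumption for illustration; `_cache_key`, `generate`, and the placeholder `call_model` are hypothetical names, and the sketch omits the batching, per-executor rate limiting, and Delta Lake-backed cache persistence that the paper describes.

```python
import hashlib
import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql.functions import pandas_udf
from pyspark.sql.types import StringType

spark = SparkSession.builder.getOrCreate()

def _cache_key(model: str, prompt: str) -> str:
    # Content-addressable key: the same (model, prompt) pair always hashes
    # to the same key, so re-running an evaluation can reuse cached responses.
    return hashlib.sha256(f"{model}\x00{prompt}".encode()).hexdigest()

@pandas_udf(StringType())
def generate(prompts: pd.Series) -> pd.Series:
    # Series-to-Series UDF: each executor receives a batch of prompts as a
    # pandas Series and returns the corresponding responses.
    def call_model(prompt: str) -> str:
        key = _cache_key("example-model", prompt)
        # A real implementation would check the cache for `key`, call the
        # model API on a miss, and write the response back to the cache.
        return f"<response {key[:8]}>"
    return prompts.map(call_model)

df = spark.createDataFrame([("What is 2 + 2?",), ("Name a prime.",)], ["prompt"])
df.withColumn("response", generate("prompt")).show(truncate=False)
```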