LLMs instead of Human Judges? A Large Scale Empirical Study across 20 NLP Evaluation Tasks

Anna Bavaresco, Raffaella Bernardi, Leonardo Bertolazzi, Desmond Elliott, Raquel Fernández, Albert Gatt, Esam Ghaleb, Mario Giulianelli, Michael Hanna, Alexander Koller, André F. T. Martins, Philipp Mondorf, Vera Neplenbroek, Sandro Pezzelle, Barbara Plank, David Schlangen, Alessandro Suglia, Aditya K Surikuchi, Ece Takmaz, Alberto Testoni

TL;DR

This study systematically probes whether large language models can replace human judges for NLP evaluation. By assembling Judge-Bench, a large, extensible suite of 20 human-annotated datasets, and evaluating 11 LLMs across varied tasks and data sources, it reveals wide variability in alignment with human judgments and highlights task- and data-source–dependent limits. The findings show LLM evaluators can be reliable for some properties (e.g., instruction following) but are inconsistent across tasks, with safety-related judgments particularly challenging. The work emphasizes task-specific validation and provides Judge-Bench as a resource to calibrate and compare LLM-based evaluations, while noting limitations and proposing future directions like pairwise judgments and multilingual extensions.

Abstract

There is an increasing trend towards evaluating NLP models with LLMs instead of human judgments, raising questions about the validity of these evaluations, as well as their reproducibility in the case of proprietary models. We provide JUDGE-BENCH, an extensible collection of 20 NLP datasets with human annotations covering a broad range of evaluated properties and types of data, and comprehensively evaluate 11 current LLMs, covering both open-weight and proprietary models, for their ability to replicate the annotations. Our evaluations show substantial variance across models and datasets. Models are reliable evaluators on some tasks, but overall display substantial variability depending on the property being evaluated, the expertise level of the human judges, and whether the language is human or model-generated. We conclude that LLMs should be carefully validated against human judgments before being used as evaluators.
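To make the evaluation setup concrete, below is a minimal sketch (not the authors' released code) of how agreement between an LLM judge and human annotators can be computed using the two statistics the paper reports: Cohen's κ for categorical annotations and Spearman's correlation for graded ones. The toy data and variable names are hypothetical.

```python
# Minimal sketch: quantifying how well an LLM judge replicates human
# annotations. Cohen's kappa is used for categorical labels and
# Spearman's rank correlation for graded judgments, as in the paper.
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-item annotations: human gold labels vs. LLM-judge labels.
human_categorical = ["safe", "unsafe", "safe", "safe", "unsafe"]
llm_categorical   = ["safe", "unsafe", "unsafe", "safe", "unsafe"]

human_graded = [4, 2, 5, 3, 1]   # e.g., 1-5 quality ratings
llm_graded   = [5, 2, 4, 3, 2]

# Categorical annotations: chance-corrected agreement.
kappa = cohen_kappa_score(human_categorical, llm_categorical)

# Graded annotations: rank correlation with human scores.
rho, p_value = spearmanr(human_graded, llm_graded)

print(f"Cohen's kappa: {kappa:.2f}")
print(f"Spearman's rho: {rho:.2f} (p={p_value:.3f})")
```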

Paper Structure

This paper contains 35 sections, 9 figures, and 4 tables.

Figures (9)

  • Figure 1: Evaluation by expert and non-expert human annotators and by LLMs for two tasks involving human-generated (left) and machine-generated text (right).
  • Figure 2: Average model correlation with human experts vs. non-experts in datasets with graded annotations.
  • Figure 3: Correlation for properties with graded judgments. Averages and error bars when the property is present in more than one dataset.
  • Figure 4: Scores (Cohen's $\kappa$ for categorical annotations and Spearman's correlation for graded annotations) on test items involving human language vs. machine-generated outputs.
  • Figure 5: Valid response rate per model.
  • ...and 4 more figures