VarBench: Robust Language Model Benchmarking Through Dynamic Variable Perturbation

Kun Qian, Shunji Wan, Claudia Tang, Youzhi Wang, Xuanming Zhang, Maximillian Chen, Zhou Yu

TL;DR

VarBench introduces a dynamic benchmarking framework that perturbs test variables to combat data contamination in language model evaluation. By extracting variables, delexicalizing questions, and sampling from defined value ranges, VarBench generates fresh test cases for GSM8K, ARC, CommonsenseQA, and TruthfulQA, enabling more reliable assessment of true reasoning capabilities. Across open- and closed-source models, VarBench reveals substantial performance gaps compared to original benchmarks, providing evidence that many baselines may be influenced by training data leakage. The approach offers a practical path toward leakage-resistant evaluation and can guide future robustness research, though it requires careful human validation and expands the evaluation workflow beyond fixed test sets.

Abstract

As large language models achieve impressive scores on traditional benchmarks, an increasing number of researchers are becoming concerned about benchmark data leakage during pre-training, commonly known as the data contamination problem. To ensure fair evaluation, recent benchmarks release only the training and validation sets, keeping the test set labels closed-source. Anyone wishing to evaluate a language model must submit its predictions for centralized processing, after which the results are published on the benchmark's leaderboard. However, this submission process is inefficient and prevents effective error analysis. To address this issue, we propose to variabilize benchmarks and evaluate language models dynamically. Specifically, we extract variables from each test case and define a value range for each variable. For each evaluation, we sample new values from these value ranges to create unique test cases, thus ensuring a fresh evaluation each time. We applied this variable perturbation method to four datasets: GSM8K, ARC, CommonsenseQA, and TruthfulQA, which cover mathematical generation and multiple-choice tasks. Our experimental results demonstrate that this approach provides a more accurate assessment of the true capabilities of language models, effectively mitigating the contamination problem.
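
To make the variabilization step concrete, here is a minimal Python sketch of how a delexicalized GSM8K-style test case can be re-instantiated at evaluation time. The question template, variable ranges, and `solution` function are hypothetical illustrations of the idea, not the paper's released artifacts.

```python
import random

# Hypothetical delexicalized GSM8K-style question: concrete numbers
# have been replaced with named variables.
TEMPLATE = (
    "Josh buys {n} books for ${price} each. "
    "He sells them all for ${total_sale}. How much profit does he make?"
)

# A value range for each extracted variable. Ranges are chosen so that
# every sampled combination stays semantically valid.
VALUE_RANGES = {
    "n": range(2, 10),
    "price": range(5, 20),
}

def solution(n, price, total_sale):
    """Ground-truth function: recomputes the answer for any sampled values."""
    return total_sale - n * price

def sample_test_case(seed=None):
    """Sample fresh variable values and build a new (question, answer) pair."""
    rng = random.Random(seed)
    n = rng.choice(VALUE_RANGES["n"])
    price = rng.choice(VALUE_RANGES["price"])
    total_sale = n * price + rng.randint(1, 50)  # guarantee a positive profit
    question = TEMPLATE.format(n=n, price=price, total_sale=total_sale)
    return question, solution(n, price, total_sale)

if __name__ == "__main__":
    q, a = sample_test_case(seed=0)
    print(q)
    print("ground truth:", a)
```

Because the ground truth is recomputed from the sampled values rather than looked up, each evaluation run sees a question whose exact text and answer could not have appeared verbatim in pre-training data.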

Paper Structure

This paper contains 51 sections, 1 equation, 9 figures, and 13 tables.

Figures (9)

  • Figure 1: Delexicalized version of a question from the GSM8K test set. Existing LLMs can "solve" the question correctly when given the original text. After the delexicalized variables are replaced with new values, the reasoning capabilities of such LLMs seem to falter.
  • Figure 2: Data construction flow. We prompt an LLM to extract variables and generate delexicalized questions, solution functions, and value ranges. To construct a new test case, we sample new values from the value ranges and combine them with the delexicalized questions; the solution functions then take the sampled values to compute the new ground-truth solutions. (A hypothetical sketch of this extraction prompt appears after this list.)
  • Figure 3: Ablation study on the importance of variable replacement. We compare our variable-focused perturbation against alternative perturbation strategies (named in parentheses) in terms of the percentage difference from each model's performance on the unperturbed original benchmarks.
  • ...and 6 more figures
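
Figure 2's extraction step can be driven by any instruction-following model. The sketch below illustrates what such a variabilization request might look like; the prompt wording and the `call_llm` wrapper are hypothetical placeholders, not the authors' released prompt or API.

```python
# Hypothetical prompt for the variable-extraction step shown in Figure 2.
# This is an illustrative sketch of the kind of request sent to the LLM,
# not the paper's actual prompt.
EXTRACTION_PROMPT = """\
Given the following test question and its answer, do three things:
1. Replace every concrete value with a named variable, producing a
   delexicalized question template.
2. For each variable, propose a value range that keeps the question
   semantically valid.
3. Write a Python function solution(...) that computes the ground-truth
   answer from the variables.

Question: {question}
Answer: {answer}
"""

def variabilize(question, answer, call_llm):
    """Send the extraction prompt to an LLM client supplied by the caller.

    `call_llm` is a placeholder for any chat-completion wrapper that maps
    a prompt string to the model's text response.
    """
    return call_llm(EXTRACTION_PROMPT.format(question=question, answer=answer))
```

As the paper notes, the model's output for each test case still needs human validation before the delexicalized templates, ranges, and solution functions enter the benchmark.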