Automatically Generating Hard Math Problems from Hypothesis-Driven Error Analysis

Jiayu Fu, Mourad Heddaya, Chenhao Tan

Abstract

Numerous math benchmarks exist to evaluate LLMs' mathematical capabilities. However, most involve extensive manual effort and are difficult to scale. Consequently, they cannot keep pace with LLM development or easily provide new instances to mitigate overfitting. Some researchers have proposed automatic benchmark generation methods, but few focus on identifying the specific math concepts and skills on which LLMs are error-prone, and most can only generate category-specific benchmarks. To address these limitations, we propose a new math benchmark generation pipeline that uses AI-generated hypotheses to identify the specific math concepts and skills that LLMs struggle with, and then generates new benchmark problems targeting these weaknesses. Experiments show that hypothesis accuracy positively correlates with the difficulty of the generated problems: problems generated from the most accurate hypotheses reduce Llama-3.3-70B-Instruct's accuracy to as low as 45%, compared to 77% on the original MATH benchmark. Furthermore, our pipeline is highly adaptable and can be applied beyond math to explore a wide range of LLM capabilities, making it a valuable tool for investigating how LLMs perform across different domains.
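The abstract describes a three-stage generation loop: filter problems the target LLM consistently fails, hypothesize the underlying concepts and skills, then generate new problems targeting those weaknesses. Below is a minimal Python sketch of one way such a loop could be wired together; the `query_llm` helper, the prompts, and all function and parameter names are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of the three-stage pipeline described in the abstract.
# All names, prompts, and the query_llm helper are illustrative assumptions.

def query_llm(model: str, prompt: str) -> str:
    """Placeholder for a call to an LLM (target, analyst, or generator)."""
    raise NotImplementedError

def filter_failures(problems: list[dict], target_model: str, n_trials: int = 3) -> list[dict]:
    """Stage 1: keep problems the target LLM consistently answers incorrectly."""
    failures = []
    for p in problems:
        answers = [query_llm(target_model, p["question"]) for _ in range(n_trials)]
        if all(a.strip() != p["answer"] for a in answers):
            failures.append(p)
    return failures

def generate_hypothesis(failures: list[dict], analyst_model: str) -> str:
    """Stage 2: hypothesize which concept or skill underlies the failures."""
    prompt = (
        "These math problems were answered incorrectly:\n"
        + "\n".join(p["question"] for p in failures)
        + "\nState a hypothesis about the specific concept or skill the model lacks."
    )
    return query_llm(analyst_model, prompt)

def generate_problems(hypothesis: str, generator_model: str, k: int = 10) -> str:
    """Stage 3: generate new problems that exercise the hypothesized weakness."""
    prompt = (
        f"Hypothesis about the model's weakness: {hypothesis}\n"
        f"Write {k} new math problems with answers that target exactly this weakness."
    )
    return query_llm(generator_model, prompt)
```

The sketch only fixes the control flow; in practice, each stage would also need answer normalization, hypothesis validation against held-out failures, and filtering of generated problems.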

Paper Structure

This paper contains 43 sections, 13 figures, and 1 table.

Figures (13)

  • Figure 1: Overview of the three-stage generation pipeline: (1) filter problems that the target LLM consistently fails, (2) generate hypotheses about the concepts and skills underlying those failures, and (3) generate new problems guided by the hypotheses.
  • Figure 2: Hypothesis accuracy distributions across granularity levels (using GPT-4.1-mini for hypothesis generation). The low-granularity prompt achieves the highest median and quartile accuracies. Accuracy increases from extremely low to low granularity, then decreases as granularity increases further.
  • Figure 3: Number of hypotheses with accuracy over 0.8 using GPT-4.1-mini under different prompts for hypothesis generation.
  • Figure 4: Llama-3.3-70B-Instruct solve rates on generated problems under hypotheses generated with different prompts. The model's solve rate on the original MATH benchmark is 77% [meta-llama_llama-3.3-70b-instruct_2024]. The solve rate decreases from extremely low to low granularity, then increases from low to high, mirroring the trend in the number of high-accuracy hypotheses (Figure 3).
  • Figure 5: Hypothesis accuracy trends for GPT-4o-mini under different prompts for hypothesis generation.
  • ...and 8 more figures