
Towards Reward Modeling for AI Tutors in Math Mistake Remediation

Kseniia Petukhova, Ekaterina Kochmar

Abstract

Evaluating the pedagogical quality of AI tutors remains challenging: standard NLG metrics do not capture whether responses identify mistakes, scaffold reasoning, or avoid revealing the answer. For the task of mistake remediation, we derive a hierarchy of pedagogical aspects from human pairwise preferences on MRBench and synthesize minimally contrastive response pairs that differ along key aspects (e.g., mistake identification and location, targetedness, scaffolding, actionability, clarity, and coherence). We develop and release Bradley-Terry preference models trained on weighted-sum rankings automatically created from MRBench, on synthetic pairs, and on combinations of the two. Using only synthetic data, our best model reaches 0.69 pairwise accuracy on a human preference test set, and combining weighted-sum data with targeted synthetic groups improves accuracy to 0.74, outperforming larger general-purpose reward models while using only a 0.5B-parameter backbone.


Figures (7)

  • Figure 1: Example of annotated tutor responses from MRBench. The Sonnet response is annotated as more actionable ("Yes" vs. "To some extent") because it prompts the student to recall the prefix for a five-sided shape rather than revealing it directly. In contrast, the Expert response is more encouraging in tone ("Encouraging" vs. "Neutral").
  • Figure 2: Pipeline for synthetic data augmentation. The procedure augments MRBench by generating aspect-specific improvements of suboptimal responses, jointly improved variants, and controlled degradations of optimal responses, thereby constructing structured preference pairs aligned with human annotation preferences (a sketch of this pair construction follows the figure list). Suboptimal responses are those that do not receive desirable annotations in one or more of the following MRBench dimensions: Revealing the Answer, Providing Guidance, Actionability, and Coherence. In contrast, poor responses receive undesirable annotations across all four of these dimensions.
  • Figure 3: Illustrative contrastive examples for the hierarchy of pedagogical aspects.
  • Figure 4: Basic prompt template used for LLM-based preference annotation.
  • Figure 5: Prompt template with guidelines used for LLM-based preference annotation.
  • ...and 2 more figures