
Personalized AI Practice Replicates Learning Rate Regularity at Scale

Jocelyn Beauchesne, Christine Maroti, Jeshua Bratman, Jerome Pesenti, Laurence Holt, Alex Tambellini, Allison McGrath, Matthew Guo, Sarah Peterson

Abstract

Recent research demonstrated that students exhibit consistent learning rates across diverse educational contexts. We test these findings using a dataset of 1.8 million (366k post-filtering) student interactions from the digital platform Campus AI, providing further evidence of regularity in learning rates among students. Unlike prior work requiring manual cognitive modeling, Campus AI automatically generates Knowledge Components (KCs) and corresponding exercises, both of which are validated by human experts. This one-to-many mapping facilitates the application of Additive Factors Models to measure learning parameters without complex cognitive modeling. Using mixed-effects logistic regression, we confirmed the core finding of prior work: students displayed substantial variation in initial knowledge ($\text{IQR} = [2.78, 12.18]$ practice opportunities to reach 80% mastery) but remarkably consistent learning rates ($\text{IQR} = [7.01, 8.25]$ opportunities). Furthermore, students using this fully automated system achieved 80% mastery in a median of 7.22 practice opportunities, comparable to the 6.54 reported for expert-designed curricula. These results suggest that automated, science-grounded content generation can support effective personalized learning at scale. Data and code are publicly available at https://github.com/Campus-edu-AI/learning-rate
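To make the "opportunities to reach 80% mastery" quantity concrete, here is a minimal sketch of how such a number falls out of an Additive Factors Model: the log-odds of a correct response are modeled as student ability plus KC easiness plus a per-opportunity learning-rate term, and the mastery point is where the predicted probability crosses the threshold. The parameter values below are illustrative assumptions, not fitted estimates from the paper, and the function names are ours.

```python
import math

def afm_logit(theta, beta, gamma, t):
    """AFM-style log-odds of a correct response after t practice opportunities:
    theta = student ability, beta = KC easiness, gamma = learning rate."""
    return theta + beta + gamma * t

def opportunities_to_mastery(theta, beta, gamma, target=0.80):
    """Solve afm_logit(theta, beta, gamma, t) = logit(target) for t,
    i.e. the practice opportunities needed to reach the mastery threshold."""
    target_logit = math.log(target / (1 - target))  # logit(0.80) ~= 1.386
    return max(0.0, (target_logit - theta - beta) / gamma)

# Illustrative (made-up) parameters for one student on one KC.
print(round(opportunities_to_mastery(theta=0.0, beta=0.3, gamma=0.15), 2))
```

Under this framing, the paper's IQRs correspond to variation in the estimated per-student parameters: wide spread in the ability/easiness intercepts, narrow spread in the learning-rate slopes.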

Figures (3)

  • Figure 1: Observed learning curve versus statistical model predictions across practice opportunities on a Knowledge Component. The solid orange line shows empirical accuracy rates from student practice data. The dashed blue line represents the population-level prediction from a mixed-effects logistic regression model (iAFM framework). Students show consistent improvement from approximately 73% to 85% accuracy over 18 practice opportunities.
  • Figure 2: Parameter distributions from the base mixed-effects logistic regression model (iAFM framework) for n=7161 students. Both distributions show the individual-level variation around population parameters, with initial abilities displaying greater heterogeneity than learning rates. Quartile boundaries (Q1, Q3) and 95% confidence intervals are marked for reference.
  • Figure 3: Scatter plot of the course subject factor effects: average $\theta_{\text{course subject}}$ against average $\delta_{\text{course subject}}$, averaged across ablation Models 2, 4, 6, and 7.