Evaluating LLMs for Answering Student Questions in Introductory Programming Courses

Thomas Van Mullem, Bart Mesuere, Peter Dawyndt

Abstract

The rapid emergence of Large Language Models (LLMs) presents both opportunities and challenges for programming education. While students increasingly use generative AI tools, unrestricted access often hinders the learning process because these tools tend to provide complete solutions rather than pedagogical hints. Concurrently, educators face significant workload and scalability challenges when providing timely, personalized feedback. This study investigates the capabilities of LLMs to safely and effectively assist educators in answering student questions within a CS1 programming course. To achieve this, we established a rigorous, reproducible evaluation process by curating a benchmark dataset of 170 authentic student questions from a learning management system, paired with ground-truth responses authored by subject matter experts. Because traditional text-matching metrics are insufficient for evaluating open-ended educational responses, we developed and validated a custom LLM-as-a-Judge metric optimized for assessing pedagogical accuracy. Our findings demonstrate that models such as Gemini 3 Flash can surpass the quality baseline of typical educator responses, achieving high alignment with expert pedagogical standards. To mitigate persistent risks such as hallucination and to ensure alignment with course-specific context, we advocate for a "teacher-in-the-loop" implementation. Finally, we abstract our methodology into a task-agnostic evaluation framework, arguing for a shift in the development of educational LLM tools from ad-hoc, post-deployment testing to a quantifiable, pre-deployment validation process.
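
The LLM-as-a-Judge metric mentioned above can be pictured with a short sketch. The Python snippet below is a hypothetical illustration rather than the metric used in this study: the call_llm helper, the rubric wording, and the 1-5 scale are assumptions made for readability; only the idea of scoring a candidate answer against an expert's reference answer comes from the abstract.

    # Minimal sketch of an LLM-as-a-Judge scorer (hypothetical; not this paper's implementation).
    # `call_llm` is a placeholder for whatever chat-completion client is actually used.
    import re

    JUDGE_PROMPT = """You are grading an answer to a student's programming question.
    Question:
    {question}

    Expert reference answer (ground truth):
    {reference}

    Candidate answer to grade:
    {candidate}

    Rate how well the candidate answer aligns with the expert reference in terms of
    pedagogical accuracy: does it identify the same issue and guide the student
    without simply handing over a complete solution? Reply with a single integer
    from 1 (no alignment) to 5 (full alignment)."""

    def call_llm(prompt: str) -> str:
        """Placeholder for a real chat-completion call (e.g. to a Gemini model)."""
        raise NotImplementedError("plug in an LLM client here")

    def judge_alignment(question: str, reference: str, candidate: str) -> int:
        """Ask the judge model for an alignment score and parse the first digit it returns."""
        reply = call_llm(JUDGE_PROMPT.format(
            question=question, reference=reference, candidate=candidate))
        match = re.search(r"[1-5]", reply)
        if match is None:
            raise ValueError(f"judge reply contained no score: {reply!r}")
        return int(match.group())

In practice, such per-question scores would be aggregated over the benchmark and validated against the scores given by subject matter experts, as the heatmap in Figure 3 suggests.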

Paper Structure

This paper contains 31 sections and 11 figures.

Figures (11)

  • Figure 1: Example of a question asked by a student (Ray Walsh) at the top of the code, followed by the answer provided by one of the educators (Tim Hodkiewicz) at line 11 of the code. Names have been pseudonymized.
  • Figure 2: Ground truth compiled by a subject matter expert; both the identified issue and an answer for the student are provided. This answer is used as ground truth for the question from Figure 1.
  • Figure 3: Heatmap showing the difference between SME scores and the scores assigned by the LLM-as-a-Judge, where scores express the alignment between an actor’s answer and an expert’s reference answer.
  • Figure 4: Q&A task accuracy score of the Gemini 2.5 Flash model in relation to the input data provided to the model. The baseline is represented by the final and best-performing prompt, which contains all input data except the line number. '-' indicates that a piece of data was left out of the baseline prompt; '+' indicates that a piece of data was added.
  • Figure 5: Q&A task accuracy scores of different models within the Gemini family. Thinking was disabled on all models.
  • ...and 6 more figures