An AI Teaching Assistant for Motion Picture Engineering

Deirdre O'Regan, Anil C. Kokaram

Abstract

The rapid rise of LLMs over the last few years has prompted growing experimentation with LLM-driven AI tutors. However, both the implementation details and the benefits in a teaching environment remain in the early stages of exploration. This article addresses these issues in the context of implementing an AI Teaching Assistant (AI-TA) using Retrieval Augmented Generation (RAG) for Trinity College Dublin's Master's Motion Picture Engineering (MPE) course. We provide details of our implementation (including the LLM prompt and code) and highlight how we designed and tuned our RAG pipeline to meet course needs. We describe our survey instrument and report on the impact of the AI-TA through several quantitative metrics. The scale of our experiment (43 students, 296 sessions, 1,889 queries over 7 weeks) was sufficient to give us confidence in our findings. Unlike previous studies, we experimented with allowing the use of the AI-TA in open-book examinations. Statistical analysis across three exams showed no statistically significant performance differences regardless of AI-TA access (p > 0.05), demonstrating that thoughtfully designed assessments can maintain academic validity. Student feedback revealed that the AI-TA was beneficial (mean = 4.22/5), while students were ambivalent about preferring it to human tutoring (mean = 2.78/5).
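
To make the reported exam comparison concrete, the sketch below shows one way such a test can be run. The choice of Welch's two-sample t-test (via SciPy) and all score values are illustrative assumptions on our part, not the paper's data or analysis code.

    # Hypothetical exam-score comparison: students with vs. without AI-TA
    # access. All numbers are placeholders for illustration only.
    from scipy import stats

    scores_with_ai_ta = [68, 72, 81, 59, 77, 70, 85, 64]
    scores_without_ai_ta = [71, 66, 79, 62, 74, 69, 83, 60]

    # Welch's t-test (equal_var=False) avoids assuming equal group variances.
    t_stat, p_value = stats.ttest_ind(
        scores_with_ai_ta, scores_without_ai_ta, equal_var=False
    )
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
    # p > 0.05 indicates no detectable difference between the groups,
    # mirroring the result reported across the three exams.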

Figures

  • Figure 1: A typical RAG-based AI-TA: based on the student's query, the AI-TA searches the course materials, retrieves relevant context (i.e., excerpts plus source metadata), and forwards the query plus context to an LLM, which generates a response grounded in the materials and includes source citations (see the query sketch after this list).
  • Figure 2: Our AI-TA's implementation architecture and core student workflow. To reproduce this system, create a Project in Microsoft Foundry and, from there: (1) deploy Azure OpenAI models; (2) upload course materials and deploy an Azure AI Search instance to create a Hybrid Index; (3) deploy an Azure Prompt Flow. In Microsoft Azure, deploy: (4) an Azure SQL Database; (5) our custom web application as an Azure Web App. Our web application code, Prompt Flow, Prompt Template, and detailed setup instructions are available on GitHub (see footnote); you will need to adapt our Prompt Flow and Prompt Template to your specific needs. A minimal query sketch follows this list.
  • Figure 3: Workflow for ingesting course materials: first, convert all course materials to text-based documents (a conversion sketch follows this list); then, using Microsoft Foundry's UI, upload the documents, create a new index, connect the Prompt Flow to the index, and test and deploy the Prompt Flow.
  • Figure 4: AI-TA engagement: number of MPE student queries per day over 7 weeks. Annotations mark usage peaks that coincide with assessment-related events. Overall engagement statistics are displayed above the chart.
  • Figure 5: Student perceptions of the AI-TA from the course exit survey (question texts summarized for brevity). We report the mean, $\mu$, and standard deviation, $\sigma$, for Q1–Q5; $N$ denotes the number of respondents to each question (excluding "Not Applicable" responses). Selected quotes from the open-ended Q6 responses are also shown.
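
To illustrate the query path in Figures 1 and 2, here is a minimal sketch of a retrieval-augmented chat call, assuming the Azure OpenAI "on your data" chat-completions extension with an Azure AI Search hybrid index. All endpoints, keys, and deployment/index names are placeholders, and our actual system routes student queries through the deployed Prompt Flow rather than calling the API directly.

    from openai import AzureOpenAI  # pip install openai

    client = AzureOpenAI(
        azure_endpoint="https://<your-openai-resource>.openai.azure.com",
        api_key="<azure-openai-key>",
        api_version="2024-02-01",
    )

    response = client.chat.completions.create(
        model="<gpt-deployment-name>",  # the model deployed in step (1)
        messages=[{"role": "user", "content": "Explain chroma subsampling."}],
        # "On your data": the service retrieves context from the hybrid
        # index built in step (2) and grounds the answer in course materials.
        extra_body={
            "data_sources": [{
                "type": "azure_search",
                "parameters": {
                    "endpoint": "https://<your-search>.search.windows.net",
                    "index_name": "<course-materials-index>",
                    "query_type": "vector_simple_hybrid",
                    "embedding_dependency": {
                        "type": "deployment_name",
                        "deployment_name": "<embedding-deployment>",
                    },
                    "authentication": {
                        "type": "api_key",
                        "key": "<search-admin-key>",
                    },
                },
            }],
        },
    )
    print(response.choices[0].message.content)  # grounded answer

With this extension the service also returns the retrieved citations alongside the assistant message, which a front end can render as source links, matching the flow shown in Figure 1.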
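
For the "convert to text" step in Figure 3, the sketch below shows one way to batch-convert PDF course materials before uploading them for indexing. The choice of the pypdf library and all paths are our illustrative assumptions; any extractor that produces clean text will do.

    from pathlib import Path
    from pypdf import PdfReader  # pip install pypdf

    def pdf_to_text(src: Path, dst_dir: Path) -> Path:
        """Extract the text of one PDF into a UTF-8 .txt file."""
        reader = PdfReader(src)
        text = "\n".join(page.extract_text() or "" for page in reader.pages)
        dst = dst_dir / (src.stem + ".txt")
        dst.write_text(text, encoding="utf-8")
        return dst

    out_dir = Path("text_materials")
    out_dir.mkdir(exist_ok=True)
    for pdf in sorted(Path("course_materials").glob("*.pdf")):
        print("wrote", pdf_to_text(pdf, out_dir))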