Scalable and Explainable Learner-Video Interaction Prediction using Multimodal Large Language Models

Dominik Glandorf, Fares Fawzi, Tanja Käser

Abstract

Learners' use of video controls in educational videos provides implicit signals of cognitive processing and instructional design quality, yet the lack of scalable and explainable predictive models limits instructors' ability to anticipate such behavior before deployment. We propose a scalable, interpretable pipeline for predicting population-level watching, pausing, skipping, and rewinding behavior as proxies for cognitive load from video content alone. Our approach leverages multimodal large language models (MLLMs) to compute embeddings of short video segments and trains a neural classifier to identify temporally fine-grained interaction peaks. Drawing from multimedia learning theory on instructional design for optimal cognitive load, we code features of the video segments using GPT-5 and employ them as a basis for interpreting model predictions via concept activation vectors. We evaluate our pipeline on 77 million video control events from 66 online courses. Our findings demonstrate that classifiers based on MLLM embeddings reliably predict interaction peaks, generalize to unseen academic fields, and encode interpretable, theory-relevant instructional concepts. Overall, our results show the feasibility of cost-efficient, interpretable pre-screening of educational video design and open new opportunities to empirically examine multimedia learning theory at scale.
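To make the pipeline concrete, the following minimal sketch illustrates its two core steps: labeling each video segment as an interaction peak if its signal falls within the video's top K%, and training a small neural classifier on frozen segment embeddings. All names and data here are hypothetical stand-ins (random vectors replace the MLLM embeddings, a scikit-learn MLP replaces the paper's classifier); it is a sketch of the idea under stated assumptions, not the authors' implementation.

```python
# Minimal sketch of the prediction pipeline; hypothetical stand-ins only.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

TOP_K_PERCENT = 5  # a segment is a "peak" if its signal is in the video's top 5%

def label_peaks(signal: np.ndarray, top_k_percent: float) -> np.ndarray:
    """1 if the segment's interaction signal is within the top-K% of values,
    0 otherwise."""
    threshold = np.percentile(signal, 100 - top_k_percent)
    return (signal >= threshold).astype(int)

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 256))  # stand-in for MLLM segment embeddings
signal = rng.poisson(lam=3.0, size=1000)   # stand-in for per-segment pause counts

labels = label_peaks(signal, TOP_K_PERCENT)

# Small neural classifier trained on top of the frozen embeddings.
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300, random_state=0)
clf.fit(embeddings[:800], labels[:800])
scores = clf.predict_proba(embeddings[800:])[:, 1]
print("held-out AUC:", roc_auc_score(labels[800:], scores))
```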

Figures (5)

  • Figure 1: Pipeline for predicting learners’ interactions with online learning videos and explaining the predictions using multimedia learning theory.
  • Figure 2: Three modalities of video segments around $t$ are encoded by pre-trained transformers. A neural classifier predicts whether ${\text{Signal}}_v(t)$ is among the top K% at time point $t$.
  • Figure 3: AUC ($\pm$ std, 5 seeds) for predicting the top 5% of learner-video interaction moments in fields unseen during training, as a measure of our classifier's generalization.
  • Figure 4: Learner-video interactions at 6,000 video moments (50% sampled from the top 5% of ranks), grouped by CTML features coded by GPT-5 (inter-rater agreement shown in the right panel). For example, moments without a formula have an average ${\text{PausedAt}}_v(t)$ rank of 67% within their video, whereas moments with a formula have a higher average rank of 71%.
  • Figure 5: TCAV values of CTML concepts and activations in our classifier $H$. Significant (*) values above 0.5 mean that the classifier was positively sensitive to the presence of the concept in its activation space (an illustrative TCAV sketch follows this list).
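
As context for Figure 5, the sketch below shows the two standard steps of testing with concept activation vectors (TCAV), under simplifying assumptions: a linear probe separates hidden activations of concept-positive segments from random counterexamples, and the TCAV score is the fraction of examples whose prediction is positively sensitive to the concept direction. Shapes, names, and the per-example gradients are hypothetical stand-ins; this is not the paper's code.

```python
# Minimal TCAV-style sketch; hypothetical shapes and names throughout.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hidden-layer activations of the classifier for segments WITH a concept
# (e.g., "contains a formula") and for random counterexamples.
acts_concept = rng.normal(loc=0.5, size=(100, 128))
acts_random = rng.normal(loc=0.0, size=(100, 128))

# Step 1: the concept activation vector (CAV) is the normal of a linear
# probe that separates concept activations from random ones.
X = np.vstack([acts_concept, acts_random])
y = np.array([1] * 100 + [0] * 100)
probe = LogisticRegression(max_iter=1000).fit(X, y)
cav = probe.coef_[0] / np.linalg.norm(probe.coef_[0])

# Step 2: the TCAV score is the fraction of test segments whose "interaction
# peak" logit increases when the hidden activation moves along the CAV.
# Per-example gradients of the logit w.r.t. the activation are stand-ins here
# (for a linear output head, each gradient equals the head's weight vector).
grad_logit = rng.normal(size=(500, 128))
directional_derivs = grad_logit @ cav
tcav_score = float((directional_derivs > 0).mean())
print("TCAV score:", tcav_score)  # > 0.5 => positively sensitive to the concept
```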