Can LLMs Reason About Attention? Towards Zero-Shot Analysis of Multimodal Classroom Behavior

Nolan Platt, Sehrish Nizamani, Alp Tural, Elif Tural, Saad Nizamani, Andrew Katz, Yoonje Lee, Nada Basit

Abstract

Understanding student engagement usually requires time-consuming manual observation or invasive recording that raises privacy concerns. We present a privacy-preserving pipeline that analyzes classroom videos to extract insights about student attention without storing any identifiable footage. Our system runs on a single GPU, using OpenPose for skeletal extraction and Gaze-LLE for visual attention estimation. Original video frames are deleted immediately after pose extraction, so that only geometric coordinates (stored as JSON) are retained, ensuring FERPA compliance. The extracted pose and gaze data are processed by QwQ-32B-Reasoning, which performs zero-shot analysis of student behavior across lecture segments. Instructors access results through a web dashboard featuring attention heatmaps and behavioral summaries. Our preliminary findings suggest that LLMs show promise for multimodal behavior understanding, although they still struggle with spatial reasoning about classroom layouts. We discuss these limitations and outline directions for improving LLM spatial comprehension in educational analytics contexts.
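
As a rough illustration of the privacy-preserving step described in the abstract, the sketch below shows how a per-frame loop could serialize geometric coordinates to JSON and discard pixel data immediately afterward. The functions extract_keypoints and estimate_gaze are hypothetical stand-ins for OpenPose and Gaze-LLE inference (the paper does not specify its exact API usage), and all paths and record fields are illustrative.

    # Sketch of the per-frame privacy-preserving loop. Only geometric
    # coordinates are written to disk; pixel data never persists.
    import json
    import cv2  # OpenCV, used here only to decode video frames

    def extract_keypoints(frame):
        """Hypothetical OpenPose wrapper: returns, per detected person,
        a list of 25 (x, y, confidence) keypoints."""
        raise NotImplementedError  # replace with real OpenPose inference

    def estimate_gaze(frame, people):
        """Hypothetical Gaze-LLE wrapper: returns one 2D gaze target
        estimate per person."""
        raise NotImplementedError  # replace with real Gaze-LLE inference

    def process_video(video_path, out_path):
        cap = cv2.VideoCapture(video_path)
        records = []
        frame_idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            people = extract_keypoints(frame)
            gaze = estimate_gaze(frame, people)
            records.append({"frame": frame_idx, "pose": people, "gaze": gaze})
            del frame  # the frame is never written out; only geometry survives
            frame_idx += 1
        cap.release()
        with open(out_path, "w") as f:
            json.dump(records, f)  # the only retained artifact: coordinates

The retained JSON can then be summarized per lecture segment and passed to the LLM as plain text, since QwQ-32B-Reasoning operates on coordinates rather than images.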

Paper Structure

This paper contains 9 sections and 2 figures.

Figures (2)

  • Figure 1: Privacy-preserving vision processing. The original classroom video (far left) is split into individual frames, and each frame is passed through a blur filter for face anonymization. OpenPose then extracts 25 skeletal keypoints per person (middle), and Gaze-LLE estimates visual attention vectors (right). The original frames are then permanently deleted.
  • Figure 2: A bar graph of representative posture changes over the course of a lecture video, showing periods in which students leaned forward, slept, slouched, stood, etc. Unknown classifications result from occlusion and low-confidence keypoints; the high unknown rate is a known limitation currently being addressed (see the illustrative sketch below).
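
To make the figure's categories concrete, the sketch below shows one plausible heuristic for labeling posture from a single person's skeletal keypoints. The keypoint indices follow OpenPose's BODY_25 convention (0 = nose, 1 = neck, 8 = mid-hip), but the thresholds, ratios, and category rules are illustrative assumptions, not the paper's actual classifier, and only a subset of the figure's categories is covered.

    # Illustrative posture heuristic over OpenPose BODY_25 keypoints.
    # Thresholds and rules are assumptions for illustration only.
    NOSE, NECK, MID_HIP = 0, 1, 8   # BODY_25 indices
    CONF_THRESHOLD = 0.3            # keypoints below this are unreliable

    def classify_posture(keypoints):
        """keypoints: list of 25 (x, y, confidence) tuples for one person.
        Returns a coarse posture label; 'unknown' covers occlusion and
        low-confidence detections, matching Figure 2's unknown category."""
        needed = [keypoints[i] for i in (NOSE, NECK, MID_HIP)]
        if any(c < CONF_THRESHOLD for _, _, c in needed):
            return "unknown"  # occluded or weakly detected keypoints
        (nx, ny, _), (cx, cy, _), (hx, hy, _) = needed
        torso = abs(hy - cy)  # neck-to-hip height in pixels (y grows downward)
        if torso == 0:
            return "unknown"
        head_drop = (ny - cy) / torso  # how far the nose sits below the neck
        lean = abs(nx - hx) / torso    # horizontal offset of head over hips
        if head_drop > 0.5:
            return "sleeping"          # head resting far below the neck line
        if lean > 0.4:
            return "leaning_forward"
        if head_drop > 0.15:
            return "slouching"
        return "upright"

Per-frame labels like these can then be aggregated into the per-segment counts that Figure 2 plots.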