GUIDE: A Guideline-Guided Dataset for Instructional Video Comprehension
Jiafeng Liang, Shixin Jiang, Zekun Wang, Haojie Pan, Zerui Chen, Zheng Chu, Ming Liu, Ruiji Fu, Zhongyuan Wang, Bing Qin
TL;DR
The paper tackles the challenge of instructional video comprehension by introducing GUIDE, a guideline-guided dataset that provides task-level guidelines in addition to per-video step annotations. It builds a three-stage pipeline (video collection, automatic annotation with the SP-Generator and GL-Generator, and manual refinement) to produce 560 tasks, 3.5K videos, 15K step segments, and 560 guidelines, enabling three evaluation tasks: Step Captioning, Guideline Summarization, and Guideline-Guided Captioning. Extensive experiments compare video foundation models, language foundation models, and humans, revealing that guidelines substantially improve caption quality and that cross-video guideline mining hinges on solid single-video understanding, with visual encoders identified as a key bottleneck. The results highlight the value of explicit guidelines for learning procedures from open-domain instructional videos and establish GUIDE as a practical benchmark for future research in instructional video comprehension and education-technology applications.
Abstract
There are abundant instructional videos on the Internet, which provide tutorials for completing various tasks. Existing instructional video datasets focus only on specific steps at the video level and lack experiential guidelines at the task level, which can leave beginners struggling to learn new tasks for want of relevant experience. Moreover, specific steps without guidelines are trivial and unsystematic, making it difficult to provide a clear tutorial. To address these problems, we present the GUIDE (Guideline-Guided) dataset, which contains 3.5K videos of 560 instructional tasks across 8 domains related to daily life. Specifically, we annotate each instructional task with a guideline, representing a common pattern shared by all task-related videos. On this basis, we annotate systematic specific steps, including their associated guideline steps, specific step descriptions, and timestamps. Our proposed benchmark consists of three sub-tasks that evaluate the comprehension ability of models: (1) Step Captioning: models must generate captions for specific steps in videos. (2) Guideline Summarization: models must mine the common pattern across task-related videos and summarize it into a guideline. (3) Guideline-Guided Captioning: models must generate captions for specific steps under the guidance of the guideline. We evaluate a wide range of foundation models on GUIDE and perform in-depth analysis. Given the diversity and practicality of GUIDE, we believe it can serve as a better benchmark for instructional video comprehension.
