SHANDS: A Multi-View Dataset and Benchmark for Surgical Hand-Gesture and Error Recognition Toward Medical Training

Le Ma, Thiago Freitas dos Santos, Nadia Magnenat-Thalmann, Katarzyna Wac

Abstract

In surgical training for medical students, proficiency development relies on expert-led skill assessment, which is costly, time-limited, difficult to scale, and confined to institutions with available specialists. Automated AI-based assessment offers a viable alternative, but progress is constrained by the lack of datasets containing realistic trainee errors and the multi-view variability needed to train robust computer vision approaches. To address this gap, we present Surgical-Hands (SHands), a large-scale multi-view video dataset for surgical hand-gesture and error recognition in medical training. SHands captures linear incision and suturing performed by 52 participants (20 experts and 32 trainees), each completing three standardized trials per procedure, recorded with five RGB cameras from complementary viewpoints. The videos are annotated at the frame level with 15 gesture primitives and include a validated taxonomy of 8 trainee error types, enabling both gesture recognition and error detection. We further define standardized evaluation protocols for single-view, multi-view, and cross-view generalization, and benchmark state-of-the-art deep learning models on the dataset. SHands is publicly released to support the development of robust and scalable AI systems for surgical training grounded in clinically curated domain knowledge.
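The abstract does not describe the released file layout, so the following is a minimal sketch under stated assumptions: a hypothetical per-frame CSV with columns video_id, camera, frame, and gesture, camera IDs C1–C5, and the 15 gesture primitives named in Figure 1 (I1–I5, S1–S10). It illustrates what the cross-view evaluation protocol mentioned above could look like in practice, with training restricted to some viewpoints and testing on held-out ones; none of the function names or the CSV schema come from the paper.

```python
# Sketch only: assumed per-frame annotation CSV (video_id, camera, frame, gesture);
# neither this layout nor the split logic is taken from the SHands release.
import csv
from collections import defaultdict

GESTURES = [f"I{i}" for i in range(1, 6)] + [f"S{i}" for i in range(1, 11)]  # 15 primitives
CAMERAS = ["C1", "C2", "C3", "C4", "C5"]

def load_frame_labels(path):
    """Group per-frame gesture labels by (video_id, camera)."""
    labels = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            labels[(row["video_id"], row["camera"])].append(
                (int(row["frame"]), row["gesture"])
            )
    return labels

def cross_view_split(labels, train_cams=("C1", "C2", "C3"), test_cams=("C4", "C5")):
    """Cross-view protocol sketch: train on some viewpoints, test on held-out ones."""
    train = {k: v for k, v in labels.items() if k[1] in train_cams}
    test = {k: v for k, v in labels.items() if k[1] in test_cams}
    return train, test
```

Under the same assumptions, the single-view and multi-view protocols would correspond to restricting both splits to one camera or pooling all five cameras, respectively.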

Paper Structure

This paper contains 19 sections, 3 figures, and 7 tables.

Figures (3)

  • Figure 1: The SHands multi-view dataset. A five-camera RGB setup (C1–C5) records synchronized multi-view videos of incision and suturing tasks on ex vivo tissue. The top row shows a gesture-annotated timeline with fine-grained labels (I1–I5 for incision, S1–S10 for suturing) and transition boundaries. The bottom rows show aligned frames from all views for both incision and suturing, highlighting the complementary spatial information captured across cameras.
  • Figure 2: Overview of dataset composition and annotation coverage. The pie chart on the left illustrates the distribution of total recording time contributed by medical trainees (90.9%) and surgeons (9.1%). The middle plots report the proportion of annotated footage, showing 92% labeled coverage for surgeons and 55% for trainees. The bar plots on the right present gesture distribution for surgeons (top) and trainees (bottom), illustrating variability across incision (I1–I5) and suturing (S1–S10) categories, as well as error types (II1–II3, IS1–IS5).
  • Figure 3: Annotation quality analysis. Gesture probability distributions predicted by DVANet around two annotated boundary frames (61 and 109). Sharp probability transitions and narrow uncertainty regions indicate high temporal precision and consistency of the manual annotations, with strong classification confidence on either side of each gesture change (a minimal sketch of this boundary check follows the list).
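The boundary check described for Figure 3 can be illustrated with a small sketch: given per-frame gesture probabilities from any frame-wise classifier (the paper uses DVANet), one can measure how quickly the predicted class flips around an annotated boundary frame. The window size and the "ambiguous frame count" measure below are illustrative assumptions, not the paper's reported metric.

```python
# Illustrative sketch: the paper shows probability transitions around annotated
# boundaries; the exact sharpness measure is not specified, so we assume a simple
# one here -- how many frames near the boundary are assigned to neither the
# pre-boundary nor the post-boundary gesture (fewer = sharper transition).
import numpy as np

def transition_width(probs, boundary, window=15):
    """probs: array of shape (num_frames, num_classes) with per-frame probabilities.
    Returns the number of frames within +/- window of `boundary` whose argmax
    matches neither the gesture before nor the gesture after the boundary."""
    pre = int(np.argmax(probs[max(boundary - window, 0)]))
    post = int(np.argmax(probs[min(boundary + window, len(probs) - 1)]))
    lo, hi = max(boundary - window, 0), min(boundary + window, len(probs))
    preds = probs[lo:hi].argmax(axis=1)
    return int(np.sum((preds != pre) & (preds != post)))
```

A small value corresponds to the narrow uncertainty regions the figure describes; boundary frames such as 61 and 109 in Figure 3 are the kind of points this check would be applied to.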