Teacher-Student Diffusion Model for Text-Driven 3D Hand Motion Generation

Ching-Lam Cheng, Bin Zhu, Shengfeng He

Abstract

Generating realistic 3D hand motion from natural language is vital for VR, robotics, and human-computer interaction. Existing methods either focus on full-body motion, overlooking detailed hand gestures, or require explicit 3D object meshes, limiting generality. We propose TSHaMo, a model-agnostic teacher-student diffusion framework for text-driven hand motion generation. The student model learns to synthesize motions from text alone, while the teacher leverages auxiliary signals (e.g., MANO parameters) to provide structured guidance during training. A co-training strategy enables the student to benefit from the teacher's intermediate predictions while remaining text-only at inference. Evaluated using two diffusion backbones on GRAB and H2O, TSHaMo consistently improves motion quality and diversity. Ablations confirm its robustness and flexibility in using diverse auxiliary inputs without requiring 3D objects at test time.
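The co-training strategy described above (and the three losses in Figure 1) can be pictured with a short sketch. This is a minimal, assumption-laden illustration in PyTorch, not the paper's implementation: the names (`student`, `teacher`, `aux`, `alpha_bar`) are hypothetical, both models are assumed to predict the denoised sample directly (as the MDM backbone does), and the three losses are assumed to be equally weighted MSE terms.

```python
import torch
import torch.nn.functional as F

def cotraining_step(student, teacher, x0, t, text_emb, aux, alpha_bar):
    # x0: clean motion batch; t: integer timesteps; alpha_bar: (T,) cumulative
    # noise-schedule products. All names are illustrative, not the paper's API.
    noise = torch.randn_like(x0)
    a = alpha_bar[t].view(-1, *([1] * (x0.dim() - 1)))
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * noise  # forward diffusion q(x_t | x_0)

    # Teacher conditions on text plus auxiliary signals (e.g., MANO parameters);
    # the student stays text-only, matching the inference-time setting.
    x0_teacher = teacher(x_t, t, text_emb, aux)
    x0_student = student(x_t, t, text_emb)

    loss_teacher = F.mse_loss(x0_teacher, x0)                   # teacher vs. ground truth
    loss_student = F.mse_loss(x0_student, x0)                   # student vs. ground truth
    loss_distill = F.mse_loss(x0_student, x0_teacher.detach())  # student follows teacher

    return loss_teacher + loss_student + loss_distill
```

At test time only `student(x_t, t, text_emb)` would be called, which is what keeps the method free of 3D object meshes at inference.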

Paper Structure

This paper contains 18 sections, 7 equations, 3 figures, and 3 tables.

Figures (3)

  • Figure 1: Training procedure of our method. The student takes a noisy sample and text embedding to predict the denoised output, while the teacher also uses auxiliary conditions (e.g., 3D joints, MANO parameters, contact maps). Three losses are applied between model predictions and ground truth. The example here uses the MDM backbone.
  • Figure 2: Ablation study of guidance strength $\lambda$ using 3D hand joints as the condition (one common reading of $\lambda$ is sketched after this list).
  • Figure 3: Qualitative comparison of hand motion generation from a textual prompt.
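Figure 2 ablates a guidance strength $\lambda$. Its exact definition is not given in this excerpt; if it follows the common classifier-free-guidance convention for text-conditioned diffusion (an assumption, not confirmed by the source), a sampling-time prediction would blend conditional and unconditional estimates roughly as below.

```python
import torch

@torch.no_grad()
def guided_prediction(model, x_t, t, text_emb, lam):
    # Hedged sketch: assumes `lam` is a classifier-free-guidance scale and that
    # a zero embedding stands in for the null (unconditional) text input.
    cond = model(x_t, t, text_emb)                      # text-conditioned estimate
    uncond = model(x_t, t, torch.zeros_like(text_emb))  # null-text estimate
    return uncond + lam * (cond - uncond)               # lam > 1 strengthens the text
```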