SMASH: Mastering Scalable Whole-Body Skills for Humanoid Ping-Pong with Egocentric Vision

Junli Ren, Yinghui Li, Kai Zhang, Penglin Fu, Haoran Jiang, Yixuan Pan, Guangjun Zeng, Tao Huang, Weizhong Guo, Peng Lu, Tianyu Li, Jingbo Wang, Li Chen, Hongyang Li, Ping Luo

Abstract

Existing humanoid table tennis systems remain limited by their reliance on external sensing and their inability to achieve agile whole-body coordination for precise task execution. These limitations stem from two core challenges: achieving low-latency and robust onboard egocentric perception under fast robot motion, and obtaining sufficiently diverse task-aligned strike motions for learning precise yet natural whole-body behaviors. In this work, we present SMASH, a modular system for agile humanoid table tennis that unifies scalable whole-body skill learning with onboard egocentric perception, eliminating the need for external cameras during deployment. Our work advances prior humanoid table-tennis systems in three key aspects. First, we achieve agile and precise ball interaction with tightly coordinated whole-body control, rather than relying on decoupled upper- and lower-body behaviors. This enables the system to exhibit diverse strike motions, including explosive whole-body smashes and low crouching shots. Second, by augmenting and diversifying strike motions with a generative model, our framework benefits from scalable motion priors and produces natural, robust striking behaviors across a wide workspace. Third, to the best of our knowledge, we demonstrate the first humanoid table-tennis system capable of consecutive strikes using onboard sensing alone, despite the challenges of low-latency perception, ego-motion-induced instability, and limited field of view. Extensive real-world experiments demonstrate stable and precise ball exchanges under high-speed conditions, validating scalable, perception-driven whole-body skill learning for dynamic humanoid interaction tasks.

Paper Structure

This paper contains 32 sections, 30 equations, 11 figures, 6 tables, 1 algorithm.

Figures (11)

  • Figure 1: SMASH: Our system enables the first outdoor humanoid ping-pong player and the first whole-body smash on a humanoid robot. Through scalable motion generation and whole-body motion matching, the robot achieves expressive and agile ball interaction across a wide hitting workspace.
  • Figure 2: Overview of SMASH. Our system connects scalable motion generation, task-aligned policy learning, and egocentric onboard perception into a unified pipeline for humanoid table tennis. Data: Motion-capture demonstrations are augmented with a motion VAE to build a strike-motion dataset that covers the reachable hitting workspace. Policy: A whole-body policy is trained via reinforcement learning, where task commands are tightly coupled with motion priors through nearest motion matching, enabling the selection and execution of appropriate strike behaviors. Deploy: At test time, egocentric onboard perception provides real-time estimates of ball and robot states, which are used by a planner and matching module to generate closed-loop whole-body actions. This pipeline enables precise and natural ball interaction without relying on external sensing infrastructure.
  • Figure 3: Motion-VAE for scalable strike motion generation.
  • Figure 4: Egocentric onboard perception system. Our perception pipeline combines YOLO-based ball detection, AprilTag-based robot localization, and adaptive Kalman filtering for state estimation. The resulting system provides robust real-time ball trajectory prediction and strike-target estimation using only onboard sensing.
  • Figure 5: Motion-VAE expands strike-space coverage. Compared with the original mocap dataset, Motion-VAE-generated motions produce a much broader distribution of strike targets over the reachable hitting workspace, substantially improving data coverage for downstream policy learning.
  • ...and 6 more figures
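
The Figure 4 caption describes the perception pipeline as YOLO-based ball detection combined with Kalman filtering for ball state estimation and trajectory prediction. As an illustration only, the sketch below shows a standard linear Kalman filter tracking a ball in a 2D vertical plane from noisy position detections; the state layout, frame rate, and noise covariances are hypothetical assumptions, not the paper's actual values, and the paper's adaptive variant would additionally adjust these covariances online.

```python
import numpy as np

# Illustrative (not the paper's) Kalman filter for ball tracking.
# State: [x, z, vx, vz]; gravity enters as a known control input.
DT = 1.0 / 120.0  # assumed camera frame rate
G = 9.81

# Constant-velocity dynamics, with gravity acting on the vertical axis.
F = np.array([[1, 0, DT, 0],
              [0, 1, 0, DT],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
B = np.array([0.0, -0.5 * G * DT**2, 0.0, -G * DT])
# The detector observes position only.
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)

Q = np.eye(4) * 1e-4  # process noise (drag/spin unmodeled) -- illustrative
R = np.eye(2) * 1e-3  # measurement noise (detection jitter) -- illustrative

def kf_step(x, P, z):
    """One predict-update cycle; z is a 2D position measurement."""
    # Predict through the ballistic model.
    x = F @ x + B
    P = F @ P @ F.T + Q
    # Update with the new detection.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Usage: track a simulated ballistic ball from noisy detections.
rng = np.random.default_rng(0)
true = np.array([0.0, 1.0, 3.0, 1.0])   # simulated ground-truth state
x_est = true.copy()                      # rough initial estimate
P_est = np.eye(4)
for _ in range(60):                      # 0.5 s of flight
    true = F @ true + B
    z = true[:2] + rng.normal(0.0, 0.03, size=2)
    x_est, P_est = kf_step(x_est, P_est, z)
print("position error:", np.linalg.norm(x_est[:2] - true[:2]))
```

Once the filtered state converges, rolling the dynamics model forward without updates yields the predicted trajectory, from which a strike target (e.g., the ball's crossing point of a chosen plane) can be read off.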