
ActiveGlasses: Learning Manipulation with Active Vision from Ego-centric Human Demonstration

Yanwen Zou, Chenyang Shi, Wenye Yu, Han Xue, Jun Lv, Ye Pan, Chuan Wen, Cewu Lu

Abstract

Large-scale real-world robot data collection is a prerequisite for bringing robots into everyday deployment. However, existing pipelines often rely on specialized handheld devices to bridge the embodiment gap, which not only increases operator burden and limits scalability, but also makes it difficult to capture the naturally coordinated perception-manipulation behaviors of daily human interaction. This challenge calls for a more natural system that can faithfully capture human manipulation and perception behaviors while enabling zero-shot transfer to robotic platforms. We introduce ActiveGlasses, a system for learning robot manipulation from ego-centric human demonstrations with active vision. A stereo camera mounted on smart glasses serves as the sole perception device for both data collection and policy inference: the operator wears it during bare-hand demonstrations, and the same camera is mounted on a 6-DoF perception arm during deployment to reproduce human active vision. To enable zero-shot transfer, we extract object trajectories from demonstrations and use an object-centric point-cloud policy to jointly predict manipulation and head movement. Across several challenging tasks involving occlusion and precise interaction, ActiveGlasses achieves zero-shot transfer with active vision, consistently outperforms strong baselines under the same hardware setup, and generalizes across two robot platforms.
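As a rough illustration of the interface described in the abstract, the sketch below shows how an object-centric point-cloud policy could jointly output a manipulation target and a head-camera pose at each step. Every name here (`ObjectCentricPolicy`, `JointAction`, the centroid heuristic) is a hypothetical stand-in and does not reproduce the paper's RISE-based policy.

```python
# Minimal sketch of a joint manipulation + active-vision policy interface.
# All class and field names are hypothetical; the paper's actual policy is an
# object-centric point-cloud policy modified from RISE and is not shown here.
from dataclasses import dataclass
import numpy as np


@dataclass
class JointAction:
    object_pose: np.ndarray   # future 6-DoF object pose in task space (x, y, z, roll, pitch, yaw)
    gripper_open: float       # gripper command in [0, 1]
    head_pose: np.ndarray     # target 6-DoF pose for the head-mounted camera


class ObjectCentricPolicy:
    """Placeholder policy: maps a segmented object point cloud to a joint action."""

    def predict(self, object_points: np.ndarray) -> JointAction:
        # A real policy would encode the point cloud (e.g. with a sparse 3D
        # backbone) and decode an action chunk; here we return the centroid
        # as a dummy target so the sketch runs end to end.
        centroid = object_points.mean(axis=0)
        return JointAction(
            object_pose=np.concatenate([centroid, np.zeros(3)]),
            gripper_open=1.0,
            head_pose=np.concatenate([centroid + np.array([0.0, 0.0, 0.4]), np.zeros(3)]),
        )


if __name__ == "__main__":
    points = np.random.rand(1024, 3)              # segmented object point cloud (meters)
    action = ObjectCentricPolicy().predict(points)
    print(action.object_pose, action.head_pose)
```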

Paper Structure

This paper contains 15 sections, 4 equations, 6 figures, 3 tables, and 1 algorithm.

Figures (6)

  • Figure 1: ActiveGlasses enables operators to collect manipulation demonstrations with bare hands. The head-mounted glasses record stereo observations of the current task along with the operator’s head movement, and enable zero-shot transfer of manipulation with active vision to robotic platforms. Active vision allows the robot to complete tasks, including in occluded scenarios, with only head-camera input.
  • Figure 2: Our system combines XREAL Glasses with a ZED Mini stereo camera, enabling egocentric stereo video and 6-DoF head-movement data collection. During demonstration, the operator actively explores task-relevant regions of the environment and completes manipulation tasks without any handheld devices. We propose an object-centric 3D policy modified from RISE [wang2024rise], which predicts the future 6-DoF object trajectory in task space. After training, the policy is deployed zero-shot on a real-world robot: during inference, a 6-DoF robotic arm synchronously executes the head motions predicted by the policy, allowing the robot to reproduce active vision (see the deployment-loop sketch after this figure list).
  • Figure 3: (a) Perception device. A ZED Mini stereo camera is mounted on smart glasses; no additional external or wrist-mounted cameras are used. Because the user's field of view (FOV) differs from the ZED camera's, we add a fixed canvas to the glasses' UI to indicate the bottom edge of the current camera view. (b) Comparison of recorded trajectories. Several numerical jumps appear in the ZED trajectory; we therefore adopt the glasses' tracking data for training and inference.
  • Figure 4: Task setting. We introduce three tabletop manipulation tasks that require active viewpoint adjustment. In Book Placement, the camera initially faces the side of the bookshelf and must move closer and rotate to observe the empty slot before placing the book. In Occluded Distant Pour Water, the target cup is occluded by a screen, requiring the camera to adjust its viewpoint to perceive the pouring target. In Bread Insertion, the camera must tilt and reorient to observe the toaster slot before accurately inserting the bread. To reflect the operator’s torso movement during data collection, the perception arm is also mounted on a movable wheeled table, and its base position is randomized within a small range at the start of each rollout during deployment.
  • Figure 5: Data collection performance. Sigma is reported as 0 g because it operates in a zero-gravity mode.
  • ...and 1 more figure
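As referenced in the Figure 2 caption, the sketch below illustrates one possible shape of the zero-shot deployment loop: the policy consumes point clouds from the head camera, the 6-DoF perception arm mirrors the predicted head motion, and the manipulator tracks the predicted object trajectory. All interfaces below (`PerceptionArm`, `Manipulator`, `get_object_points`, `dummy_policy`) are hypothetical stand-ins, not the paper's actual API.

```python
# Coarse sketch of the deployment loop suggested by Figure 2: a 6-DoF
# perception arm reproduces the policy's head-motion output while the
# manipulator tracks the predicted object trajectory. All interfaces are
# hypothetical placeholders.
import numpy as np


class PerceptionArm:
    """Hypothetical driver for the 6-DoF arm carrying the stereo camera."""
    def move_to(self, pose: np.ndarray) -> None:
        print(f"camera -> {np.round(pose, 3)}")


class Manipulator:
    """Hypothetical driver for the manipulation arm."""
    def track_object_pose(self, pose: np.ndarray, gripper_open: float) -> None:
        print(f"object target -> {np.round(pose, 3)}, gripper {gripper_open:.1f}")


def get_object_points() -> np.ndarray:
    """Stand-in for stereo perception + object segmentation from the head camera."""
    return np.random.rand(1024, 3)


def dummy_policy(points: np.ndarray) -> tuple[np.ndarray, np.ndarray, float]:
    """Stand-in for the object-centric policy: returns (object_pose, head_pose, gripper)."""
    c = points.mean(axis=0)
    object_pose = np.concatenate([c, np.zeros(3)])
    head_pose = np.concatenate([c + np.array([0.0, 0.0, 0.4]), np.zeros(3)])
    return object_pose, head_pose, 1.0


def run_episode(steps: int = 5) -> None:
    camera_arm, robot = PerceptionArm(), Manipulator()
    for _ in range(steps):
        points = get_object_points()                       # observe with the head camera only
        object_pose, head_pose, gripper = dummy_policy(points)
        camera_arm.move_to(head_pose)                      # reproduce active vision
        robot.track_object_pose(object_pose, gripper)      # execute manipulation


if __name__ == "__main__":
    run_episode()
```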