Functional Force-Aware Retargeting from Virtual Human Demos to Soft Robot Policies

Uksang Yoo, Mengjia Zhu, Evan Pezent, Jom Preechayasomboon, Jean Oh, Jeffrey Ichnowski, Amir Memar, Ben Abbatematteo, Homanga Bharadhwaj, Ashish Deshpande, Harsha Prahlad

Abstract

We introduce SoftAct, a framework for teaching soft robot hands to perform human-like manipulation skills by explicitly reasoning about contact forces. Leveraging immersive virtual reality, our system captures rich human demonstrations, including hand kinematics, object motion, dense contact patches, and detailed contact force information. Unlike conventional approaches that retarget human joint trajectories, SoftAct employs a two-stage, force-aware retargeting algorithm. The first stage attributes demonstrated contact forces to individual human fingers and allocates robot fingers proportionally, establishing a force-balanced mapping between human and robot hands. The second stage performs online retargeting by combining baseline end-effector pose tracking with geodesic-weighted contact refinements, using contact geometry and force magnitude to adjust robot fingertip targets in real time. This formulation enables soft robotic hands to reproduce the functional intent of human demonstrations while naturally accommodating extreme embodiment mismatch and nonlinear compliance. We evaluate SoftAct on a suite of contact-rich manipulation tasks using a custom non-anthropomorphic pneumatic soft robot hand. SoftAct's controller reduces fingertip trajectory tracking RMSE by up to 55 percent and tracking variance by up to 69 percent compared to kinematic and learning-based baselines. At the policy level, SoftAct achieves consistently higher success rates in zero-shot real-world deployment and in simulation. These results demonstrate that explicitly modeling contact geometry and force distribution is essential for effective skill transfer to soft robotic hands, and that this functional information cannot be recovered through kinematic imitation alone. Project videos and additional details are available at https://soft-act.github.io/.
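
To make the two-stage algorithm concrete, here is a minimal Python sketch of both stages. It is illustrative only, not the authors' implementation: the function names, the Gaussian geodesic kernel with width sigma, and the force-scaled blending constant force_scale are all assumptions layered on the abstract's description of force-balanced finger assignment (Stage 1) and geodesic-weighted contact refinement (Stage 2).

```python
import numpy as np

def assign_fingers(human_forces, n_robot_fingers):
    """Stage 1 (offline): allocate robot fingers to human fingers in
    proportion to each human finger's share of demonstrated contact force.
    Largest-remainder rounding keeps the allocation force-balanced."""
    share = np.asarray(human_forces, dtype=float)
    share = share / share.sum()                       # per-finger force share
    quota = share * n_robot_fingers                   # fractional allocation
    counts = np.floor(quota).astype(int)              # integer part first
    leftover = n_robot_fingers - counts.sum()         # fingers still unassigned
    for i in np.argsort(quota - counts)[::-1][:leftover]:
        counts[i] += 1                                # largest remainders win
    return [h for h, c in enumerate(counts) for _ in range(c)]

def refine_target(base_target, contact_pts, force_mags, geodesic_dists,
                  sigma=0.02, force_scale=10.0):
    """Stage 2 (online): blend the baseline fingertip pose target toward
    demonstrated contact points, weighting each contact by its force
    magnitude and a Gaussian kernel over geodesic distance on the hand
    surface."""
    w = force_mags * np.exp(-(geodesic_dists / sigma) ** 2)
    if w.sum() < 1e-9:                                # no active contact:
        return base_target                            # fall back to pose tracking
    contact_goal = (w[:, None] * contact_pts).sum(axis=0) / w.sum()
    alpha = min(1.0, force_mags.sum() / force_scale)  # force-scaled blend weight
    return (1.0 - alpha) * base_target + alpha * contact_goal
```

For example, given a five-finger demonstration where the thumb carries half of the total force, assign_fingers(forces, 3) dedicates two of three robot fingers to the thumb side of the grasp, which mirrors the thumb-dominant force profiles visualized in Figure 3.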

Figures (7)

  • Figure 1: Design of the pneumatic soft manipulator. The manipulator features a rigid–soft hybrid architecture, with each soft finger actuated by three radially arranged pneumatic chambers. The elastomeric finger body is reinforced with internal rigid structures to enhance controllability and repeatability. Differential pressurization enables planar and out-of-plane bending, while the fingers are mounted on a rigid base attached to a 7-DoF robotic arm.
  • Figure 2: Simulation Setup. We approximate pneumatic pressure–induced bending using internal torque-based actuation applied to a rigid spine, which deforms the surrounding soft finger through distributed virtual spring constraints (a toy sketch of this spring coupling appears after this figure list).
  • Figure 3: Contact-Rich Manipulation Tasks and Demonstration Force Profiles. We evaluate SoftAct on six manipulation tasks (rows): light bulb insertion, light bulb twisting, cup pouring, marker grabbing, bottle unscrewing, and box reorienting. For each task, the left column visualizes the average demonstrated contact-force distribution over the human hand, highlighting task-dependent asymmetries (e.g., thumb-dominant vs. distributed contact). The remaining columns show representative time-lapse frames from a single VR demonstration for the task.
  • Figure 4: Low-level Control Performance. We evaluate trajectory tracking accuracy for a single soft finger executing planar reference trajectories, including square, circular, triangular, and rectangular motions. Each plot shows the desired fingertip trajectory in the $xy$ plane (black curve) and the executed fingertip trajectories produced by different controllers. Colored curves correspond to controller rollouts. Tracking error is computed as the Euclidean distance between the executed position and the reference trajectory at each timestep (see the error-metric sketch after this figure list). The proposed controller produces trajectories that closely follow the reference paths with low bias and variance, whereas baseline controllers exhibit noticeable drift, distortion, and accumulated error.
  • Figure 5: Retargeting Stages. Overview of the two-stage force-aware retargeting pipeline. Stage 1 performs offline force-balanced finger assignment, allocating robot fingers based on the demonstrated contact force distribution. Stage 2 performs contact-informed refinement, adjusting fingertip targets via geodesic-weighted contact refinement to align the contact surfaces between the human hand mesh and the soft robot hand.
  • ...and 2 more figures
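
The Figure 2 caption summarizes the actuation model only at a high level. As a purely hypothetical sketch of what "distributed virtual spring constraints" can mean in practice, the following explicit-Euler step pulls soft-finger nodes toward rest offsets from a torque-driven rigid spine; every name and constant here is an assumption for illustration, not the paper's simulator.

```python
import numpy as np

def spring_coupled_step(spine_pts, soft_nodes, attach_idx, rest_offsets,
                        vel, k=500.0, c=2.0, mass=0.01, dt=1e-3):
    """One explicit-Euler step of soft-finger nodes coupled to a rigid
    spine by virtual springs (illustrative stand-in for the coupling
    described in Figure 2).

    spine_pts:    (S, 3) current spine node positions, driven elsewhere
                  by the internal torque-based actuation.
    soft_nodes:   (N, 3) soft-finger node positions.
    attach_idx:   (N,) index of the spine node each soft node attaches to.
    rest_offsets: (N, 3) rest offset of each soft node from its spine node.
    vel:          (N, 3) soft-node velocities.
    Returns updated (soft_nodes, vel)."""
    targets = spine_pts[attach_idx] + rest_offsets   # spring rest positions
    force = k * (targets - soft_nodes) - c * vel     # spring pull + damping
    vel_next = vel + dt * force / mass
    return soft_nodes + dt * vel_next, vel_next
```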
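
The tracking metrics behind Figure 4 and the abstract's headline numbers reduce to a few lines. The sketch below assumes the executed and reference trajectories are sampled at matching timesteps; if the reference is instead treated as a path rather than a time-indexed signal, the distance to the nearest reference point would be used in place of the per-timestep difference.

```python
import numpy as np

def tracking_error(executed, reference):
    """Per-timestep Euclidean distance between executed and reference
    fingertip positions (the error defined in the Figure 4 caption).
    executed, reference: (T, 2) arrays of xy positions."""
    return np.linalg.norm(np.asarray(executed) - np.asarray(reference), axis=1)

def tracking_rmse(executed, reference):
    """Root-mean-square of the per-timestep tracking error."""
    err = tracking_error(executed, reference)
    return float(np.sqrt(np.mean(err ** 2)))

def tracking_variance(executed, reference):
    """Variance of the per-timestep tracking error across a rollout."""
    return float(np.var(tracking_error(executed, reference)))
```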