From Impact to Insight: Dynamics-Aware Proprioceptive Terrain Sensing on Granular Media

Yifeng Zhang, Yue Wu, Jake Futterman, Jacob Meseha, Eduardo Rosales, Irie Cooper, J. Diego Caporale, Feifei Qian

Abstract

Robots that traverse natural terrain must interpret contact forces generated under highly dynamic conditions. However, most terrain characterization approaches rely on quasi-static assumptions that neglect velocity- and acceleration-dependent effects arising during impact and rapid stance transitions. In this work, we investigate granular terrain interaction during high-speed hopping and develop a physics-based framework for dynamic terrain characterization using proprioceptive sensing alone. Through controlled hopping experiments with systematically varied impact speed and leg compliance, our measurements reveal that quasi-static assumptions lead to large discrepancies in granular terrain property estimation during high-speed hopping, particularly upon touchdown and controller-induced stiffness transitions. Velocity-dependent drag alone cannot explain these discrepancies. Instead, acceleration-dependent added-mass effects, associated with grain entrainment beneath the foot, dominate transient force responses. We integrate this force decomposition with a momentum-observer-based estimator that compensates for rigid-body inertia and gravity, and introduce an acceleration-aware weighted regression to account for increased force variance during high-acceleration events. Together, these methods enable consistent recovery of granular stiffness parameters across locomotion conditions, closely matching linear-actuator ground truth. Our results demonstrate that accurate terrain inference during high-speed locomotion requires explicit treatment of acceleration-dependent granular effects, and provide a foundation for robots to characterize complex deformable terrain during dynamic exploration of terrestrial and planetary environments.
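The acceleration-aware weighted regression described above can be sketched in a few lines. The linear force model F = k·z + b·v + m_a·a, the parameter values, and the noise model below are illustrative assumptions for this sketch, not the paper's exact formulation; the key idea shown is down-weighting samples taken during high-acceleration events, whose force measurements are noisier.

```python
# Illustrative sketch (assumed model, not the paper's exact formulation):
# recover granular stiffness k from proprioceptive force samples via an
# acceleration-aware weighted least-squares fit of F = k*z + b*v + m_a*a.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "true" terrain parameters (assumed for illustration).
k_true, b_true, ma_true = 2000.0, 50.0, 0.8   # stiffness, drag, added mass

n = 500
z = rng.uniform(0.0, 0.05, n)   # intrusion depth [m]
v = rng.uniform(0.0, 1.0, n)    # intrusion speed [m/s]
a = rng.normal(0.0, 20.0, n)    # foot acceleration [m/s^2]

# Force variance grows with |a|: high-acceleration events are noisier.
sigma = 1.0 + 0.2 * np.abs(a)
F = k_true * z + b_true * v + ma_true * a + rng.normal(0.0, sigma)

# Acceleration-aware weights: down-weight high-acceleration samples.
w = 1.0 / sigma**2

# Weighted least squares via the normal equations (X^T W X) p = X^T W F.
X = np.column_stack([z, v, a])
WX = X * w[:, None]
k_est, b_est, ma_est = np.linalg.solve(X.T @ WX, WX.T @ F)
print(f"k = {k_est:.1f}, b = {b_est:.1f}, m_a = {ma_est:.2f}")
```

With uniform weights, the high-acceleration samples would dominate the residual and bias the stiffness estimate; the inverse-variance weighting keeps the recovered k close to its true value.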

Figures (6)

  • Figure 1: Locomotion regimes on deformable terrain and their implications for proprioceptive sensing. (a) A quadruped robot navigating natural deformable terrain with spatially-varying strength and mechanics. (b) Dynamic leg-terrain interaction from a representative stride. (c) Locomotion regimes spanning quasi-static gaits (higher sensing fidelity) to dynamic gaits (higher speed), highlighting the trade-off between terrain inference accuracy and locomotion performance.
  • Figure 2: (a) Robotic hopper. (b) Load cell instrumented linear actuator for ground truth measurements. (c) Hopper free body diagram. (d) SLIP controller state machine. (e) State estimation pipeline.
  • Figure 3: Representative dynamic hopping stride on granular medium. (a) Snapshots of the hopper during a representative stride: initial compression after touchdown, maximum compression, deeper penetration induced by the stiffer extension phase, and post-liftoff. (b) Virtual leg length, defined as the body–foot height difference, measured by motor encoder. TD: touchdown; CE: compression–extension transition; LO: liftoff. (c, d) Body and foot height and velocity, respectively, measured via motion capture (MoCap). (e) Body and foot acceleration measured by onboard IMUs. (f) Quasi-static proprioceptive estimate (Eq. \ref{eqn:quasi_static}) compared with load-cell ground-truth measurements.
  • Figure 4: Terrain force decomposition under high-speed intrusion. (a) Force–depth with linear fit. (b) Intrusion speed–depth. (c) Depth–speed force map from constant-speed linear-actuator intrusions; gray: representative trials, cyan: hopper trajectory projected for prediction. (d) Measured force vs. map prediction. (e) Foot acceleration; positive values indicate the downward direction. (f) $F_{\mathrm{res}}$ and added-mass term $m_\mathrm{a}a$.
  • Figure 5: Kinematic state estimation and MO-based force inference. (a-d) Body and foot height (a,b) and velocity (c,d) estimated via the onboard-sensing-based Kalman filter, compared with MoCap measurements. (e) Toe force estimated by the momentum observer using Kalman-filter-estimated kinematics. (f) MO force vs. KF-estimated depth, compared with load-cell force vs. MoCap depth.
  • ...and 1 more figure
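As a rough illustration of the momentum-observer idea behind Figure 5, the following one-degree-of-freedom sketch estimates an external terrain force from a momentum residual. The foot mass, observer gain, and step-force scenario are assumed values for illustration only and do not reproduce the paper's estimator, which additionally compensates for full rigid-body inertia and uses Kalman-filtered kinematics.

```python
# Minimal 1-DoF momentum-observer sketch (illustrative, assumed values):
# estimate an external terrain force F_ext on a foot of mass m from the
# momentum residual r = K * (p - integral of known forces), with p = m*v.
import numpy as np

m, g = 0.5, 9.81        # foot mass [kg], gravity [m/s^2] (assumed)
dt, K = 1e-3, 100.0     # timestep [s], observer gain [1/s] (assumed)

t = np.arange(0.0, 2.0, dt)
F_ext_true = 10.0 * (t > 1.0)      # step terrain force at t = 1 s
F_act = np.full_like(t, m * g)     # actuator force exactly holds weight

# Simulate the "true" dynamics: m * dv/dt = F_act - m*g + F_ext
v = np.zeros_like(t)
for i in range(1, len(t)):
    v[i] = v[i-1] + dt * (F_act[i-1] - m * g + F_ext_true[i-1]) / m

# Momentum observer: r obeys dr/dt = K * (F_ext - r), so r -> F_ext.
r = np.zeros_like(t)
integ = 0.0
for i in range(1, len(t)):
    integ += dt * (F_act[i-1] - m * g + r[i-1])
    r[i] = K * (m * v[i] - integ)
print(f"estimated terrain force at t = 2 s: {r[-1]:.2f} N")
```

The residual converges to the external force with time constant 1/K, so the observer recovers the 10 N step well within the stance-phase timescales discussed in the paper, without requiring a direct force sensor at the foot.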