EgoFlow: Gradient-Guided Flow Matching for Egocentric 6DoF Object Motion Generation

Abhishek Saroha, Huajian Zeng, Xingxing Zuo, Daniel Cremers, Xi Wang

Abstract

Understanding and predicting object motion from egocentric video is fundamental to embodied perception and interaction. However, generating physically consistent 6DoF trajectories remains challenging due to occlusions, fast motion, and the lack of explicit physical reasoning in existing generative models. We present EgoFlow, a flow-matching framework that synthesizes realistic and physically plausible trajectories conditioned on multimodal egocentric observations. EgoFlow employs a hybrid Mamba-Transformer-Perceiver architecture to jointly model temporal dynamics, scene geometry, and semantic intent, while a gradient-guided inference process enforces differentiable physical constraints such as collision avoidance and motion smoothness. This combination yields coherent and controllable motion generation without post-hoc filtering or additional supervision. Experiments on the real-world datasets HD-EPIC, EgoExo4D, and HOT3D show that EgoFlow outperforms diffusion-based and transformer baselines in accuracy and physical realism, reduces collision rates by up to 79%, and generalizes strongly to unseen scenes. Our results highlight the promise of flow-based generative modeling for scalable and physically grounded egocentric motion understanding.
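
The abstract's central mechanism is a flow-matching sampler whose integration steps are nudged by gradients of differentiable physical penalties (collision avoidance and motion smoothness). The sketch below is a minimal illustration of that idea under stated assumptions, not the authors' implementation: `velocity_net`, `scene_sdf`, the penalty weights, and the pose layout (xyz + quaternion) are hypothetical placeholders, and the paper's hybrid Mamba-Transformer-Perceiver network and exact guidance terms may differ.

```python
import torch

def physics_penalty(traj, scene_sdf, eps=0.02):
    """Hypothetical differentiable penalty: collision (via a scene SDF) plus smoothness.

    traj: (T, 7) tensor of 6DoF poses (xyz + quaternion); only xyz is penalized here.
    scene_sdf: callable mapping (T, 3) points to signed distances (positive = free space).
    """
    pos = traj[:, :3]
    collision = torch.relu(eps - scene_sdf(pos)).sum()               # penetration beyond a safety margin
    smoothness = (pos[2:] - 2 * pos[1:-1] + pos[:-2]).pow(2).sum()   # second-difference (acceleration) term
    return collision + 0.1 * smoothness

@torch.no_grad()
def sample_trajectory(velocity_net, x_src, cond, scene_sdf, steps=50, lam=0.1):
    """Euler integration of a learned flow from the source trajectory x_src,
    with a guidance gradient pulling samples toward low physical penalty."""
    x = x_src.clone()
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((1,), i * dt)
        v = velocity_net(x, t, cond)                  # learned velocity field (hypothetical signature)
        with torch.enable_grad():                     # guidance needs gradients w.r.t. the current sample
            x_req = x.detach().requires_grad_(True)
            g = torch.autograd.grad(physics_penalty(x_req, scene_sdf), x_req)[0]
        x = x + dt * (v - lam * g)                    # follow the flow, nudged away from collisions
    return x
```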

Figures (9)

  • Figure 1: EgoFlow overview. Given a 3D scene, a task prompt, and a task goal, our method first fuses the multimodal inputs through a scene conditioning block (Sec. \ref{sec:conditioning}). The fused features condition trajectory generation. We use input trajectories as the source samples of our flow matching model (Sec. \ref{subsec:flow_matching}), which maps the generated trajectories to the target distribution, the ground-truth trajectories, through a hybrid architecture (Sec. \ref{subsec:architecture}). We integrate physical guidance at inference to ensure physically plausible, collision-free trajectories (Sec. \ref{sec:guidance}). (A rough illustrative sketch of this training setup follows the figure list below.)
  • Figure 2: HD-EPIC Qualitative Results. The green trajectory in each image is the history, followed by the prediction of each baseline and the ground truth. Not only does our method generate a plausible trajectory to the end goal, it also takes a more natural and smooth path to the target pose.
  • Figure 3: HOT3D Qualitative Results. We compare EgoFlow against the established baselines. Our method generalizes better to unseen conditions and produces geometrically coherent and physically plausible 6DoF trajectories.
  • Figure 4: We project the object positions computed by our object position estimation algorithm, which uses the MPS hand poses as described in Sec. \ref{subsec:hdepic}, onto the egocentric video frames to demonstrate the correctness and accuracy of our approach.
  • Figure 5: We visualize the output of our object position calculation algorithm on the ADT dataset. Since ADT provides rich annotations, it serves as an ideal demonstration of the effectiveness of our algorithm for generating dense object motions on the HD-EPIC dataset.
  • ...and 4 more figures
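
The Figure 1 caption frames training as mapping input (history) trajectories, used as source samples, onto the ground-truth trajectories as the target distribution. As a hedged illustration only, the snippet below sketches what a conditional flow-matching objective for such a setup could look like; the velocity network name, the straight-line probability path, and the tensor shapes are assumptions rather than the paper's exact formulation.

```python
import torch

def flow_matching_loss(velocity_net, x_src, x_tgt, cond):
    """Conditional flow-matching loss with a straight-line probability path
    from source (input) trajectories x_src to target (ground-truth) trajectories x_tgt.

    x_src, x_tgt: (B, T, 7) batches of 6DoF trajectories.
    cond: fused multimodal conditioning features (scene, prompt, goal).
    """
    B = x_src.shape[0]
    t = torch.rand(B, 1, 1)                        # one flow time per trajectory in the batch
    x_t = (1 - t) * x_src + t * x_tgt              # point on the linear interpolation path
    u_t = x_tgt - x_src                            # target velocity along that path
    v_pred = velocity_net(x_t, t.view(B), cond)    # predicted velocity field (hypothetical signature)
    return (v_pred - u_t).pow(2).mean()            # regress the velocity field onto the path velocity
```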