LOME: Learning Human-Object Manipulation with Action-Conditioned Egocentric World Model

Quankai Gao, Jiawei Yang, Qiangeng Xu, Le Chen, Yue Wang

Abstract

Learning human-object manipulation presents significant challenges due to the fine-grained, contact-rich nature of the motions involved. Traditional physics-based animation requires extensive modeling and manual setup, and more importantly, it neither generalizes well across diverse object morphologies nor scales effectively to real-world environments. To address these limitations, we introduce LOME, an egocentric world model that generates realistic human-object interactions as videos conditioned on an input image, a text prompt, and per-frame human actions, including both body poses and hand gestures. LOME injects strong and precise action guidance into object manipulation by jointly estimating spatial human actions and the environment context during training. After finetuning a pretrained video generative model on videos of diverse egocentric human-object interactions, LOME demonstrates not only high action-following accuracy and strong generalization to unseen scenarios, but also realistic physical consequences of hand-object interactions, e.g., liquid flowing from a bottle into a mug after executing a "pouring" action. Extensive experiments demonstrate that our video-based framework significantly outperforms state-of-the-art image-based and video-based action-conditioned methods, as well as Image/Text-to-Video (I/T2V) generative models, in terms of both temporal consistency and motion control. LOME paves the way for photorealistic AR/VR experiences and scalable robotic training, without being limited to simulated environments or relying on explicit 3D/4D modeling.

Paper Structure

This paper contains 16 sections, 7 equations, 14 figures, and 4 tables.

Figures (14)

  • Figure 1: Training pipeline of LOME. A pretrained VAE encoder $\mathcal{E}$ maps the reference image $I$, input video $V$, and rasterized 2D action maps $\hat{A}$ to latent representations. A camera adapter encodes per-frame ray maps into camera features, which are added to the video latents. A Diffusion Transformer (DiT), conditioned on a text prompt, denoises the concatenated noisy action and video latents, and a pretrained decoder $\mathcal{D}$ decodes the denoised latents into the generated video. (A toy training-step sketch follows this figure list.)
  • Figure 2: Action conditioning at frame $i$. (a) 3D human pose during video capture. (b) Projected 2D human pose $A_i$ after filtering out keypoints and skeleton segments outside the camera frustum. (c) Background-masked rasterized 2D action map $\hat{A}_i$ used as the conditioning signal. (See the projection-and-rasterization sketch after this list.)
  • Figure 3: Qualitative action-following comparison across tasks. We compare LOME (ours) with CoSHAND, Wan-I2V and Go-with-Flow (GwtF) on diverse human-object manipulations. “Action” denotes our 2D action maps; CoSHAND uses its own hand masks; Wan-I2V uses no action condition; GwtF uses GT optical flow as action condition. Text prompts are overlaid on the ground-truth (GT) frames.
  • Figure 4: Pouring example. We compare LOME (ours), CoSHAND, Wan-I2V and Go-with-Flow (GwtF) on a "pouring liquid" task. Only LOME produces coherent liquid dynamics with a steadily increasing liquid level consistent with the text instruction. The prompt is overlaid on the GT frames.
  • Figure 5: Temporal resampling to align text and motion. We resample clips of varying lengths to a fixed length (i.e., 6 frames). (a) Longer clips are uniformly downsampled while preserving the first and last frames. (b) Shorter clips are upsampled by back-and-forth resampling to reach the target length. (A resampling sketch follows this list.)
  • ...and 9 more figures
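
The Figure 1 caption describes the full training data flow, which is easier to see end-to-end in code. Below is a minimal, hypothetical PyTorch sketch of one training step: the paper's VAE encoder, camera adapter, and DiT are replaced by toy convolutional stand-ins, and the channel counts, noising scheme, and loss are illustrative assumptions rather than the authors' implementation (the reference image and text conditioning are omitted for brevity).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_ch = 8  # assumed latent channel count

# Toy stand-ins for the pretrained VAE encoder E, the camera adapter, and the DiT.
encoder = nn.Conv3d(3, latent_ch, 3, stride=2, padding=1)
cam_adapter = nn.Conv3d(6, latent_ch, 3, stride=2, padding=1)   # per-frame ray maps -> camera features
dit = nn.Conv3d(2 * latent_ch, 2 * latent_ch, 3, padding=1)     # denoiser over concatenated latents

video = torch.randn(1, 3, 8, 64, 64)        # input video V: (B, C, T, H, W)
action_maps = torch.randn(1, 3, 8, 64, 64)  # rasterized 2D action maps A-hat
rays = torch.randn(1, 6, 8, 64, 64)         # ray maps: per-pixel origin + direction

z_video = encoder(video) + cam_adapter(rays)  # camera features are added to the video latents
z_action = encoder(action_maps)               # action maps share the same latent space

z = torch.cat([z_action, z_video], dim=1)     # concatenate action and video latents
noise = torch.randn_like(z)
loss = F.mse_loss(dit(z + noise), noise)      # toy one-step denoising objective
loss.backward()
```

The point of the sketch is the wiring: both the video and the action maps pass through the same encoder, camera features enter additively, and the DiT denoises the action and video latents jointly.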
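
The Figure 2 conditioning signal reduces to a project-filter-rasterize step per frame. The following numpy sketch illustrates it under assumed intrinsics, image size, and splat radius; the function name and keypoint values are hypothetical, and the skeleton segments that the paper also draws between visible joints are omitted.

```python
import numpy as np

def rasterize_action_map(joints_3d, K, hw=(256, 256), radius=3):
    """Project 3D keypoints, drop those outside the frustum, splat the rest."""
    H, W = hw
    amap = np.zeros((H, W), dtype=np.float32)   # background stays masked at zero
    for X in joints_3d:                         # X = (x, y, z) in camera coordinates
        if X[2] <= 0:                           # behind the camera: outside the frustum
            continue
        u, v, w = K @ X                         # pinhole projection
        u, v = u / w, v / w
        if not (0 <= u < W and 0 <= v < H):     # outside the image bounds
            continue
        uu, vv = int(round(u)), int(round(v))   # splat a small square around the keypoint
        amap[max(vv - radius, 0):vv + radius + 1, max(uu - radius, 0):uu + radius + 1] = 1.0
    return amap

K = np.array([[200.0, 0, 128], [0, 200.0, 128], [0, 0, 1]])                # assumed intrinsics
joints = np.array([[0.1, -0.2, 1.0], [0.4, 0.0, 0.8], [2.0, 2.0, -1.0]])   # last point is culled
print(rasterize_action_map(joints, K).sum())
```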
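
The Figure 5 resampling rule can be stated in a few lines. This sketch assumes "back-and-forth resampling" means ping-ponging frame indices at the clip boundaries until the target length (6 frames, per the caption) is reached; the paper's exact scheme may differ.

```python
import numpy as np

def resample_clip(frames, target_len=6):
    n = len(frames)
    if n >= target_len:
        # (a) Uniform downsampling that always keeps the first and last frames.
        idx = np.linspace(0, n - 1, target_len).round().astype(int)
        return [frames[i] for i in idx]
    if n == 1:
        return list(frames) * target_len        # degenerate single-frame clip
    # (b) Back-and-forth (ping-pong) extension for short clips.
    order, step = list(range(n)), -1
    while len(order) < target_len:
        nxt = order[-1] + step
        if nxt < 0 or nxt >= n:                 # bounce at the clip boundaries
            step = -step
            nxt = order[-1] + step
        order.append(nxt)
    return [frames[i] for i in order]

print(resample_clip(list("ABCDEFGHIJ")))  # 10 -> 6 frames, keeps 'A' and 'J'
print(resample_clip(list("ABC")))         # 3 -> 6 frames via ping-pong
```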