MotionGrounder: Grounded Multi-Object Motion Transfer via Diffusion Transformer

Samuel Teodoro, Yun Chen, Agus Gunawan, Soo Ye Kim, Jihyong Oh, Munchurl Kim

Abstract

Motion transfer enables controllable video generation by transferring temporal dynamics from a reference video to synthesize a new video conditioned on a target caption. However, existing Diffusion Transformer (DiT)-based methods are limited to single-object videos, restricting fine-grained control in real-world scenes with multiple objects. In this work, we introduce MotionGrounder, the first DiT-based framework to handle motion transfer with multi-object controllability. Our Flow-based Motion Signal (FMS) provides a stable motion prior for target video generation, while our Object-Caption Alignment Loss (OCAL) grounds each object caption to its corresponding spatial region. We further propose a new Object Grounding Score (OGS), which jointly evaluates (i) spatial alignment between source-video objects and their generated counterparts and (ii) semantic consistency between each generated object and its target caption. Our experiments show that MotionGrounder consistently outperforms recent baselines across quantitative, qualitative, and human evaluations.
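As a rough illustration of the two components OGS combines (the paper's exact formulation is given later in the text), a score of this kind could pair a mask-overlap term for spatial alignment with an image-text similarity term for semantic consistency. The sketch below is a minimal, assumption-laden stand-in: the IoU measure, the CLIP-style embedding comparison, the equal-weight averaging, and every function name are hypothetical and are not the authors' definition.

```python
# Hypothetical illustration of an object-grounding-style score; NOT the
# paper's Object Grounding Score (OGS) definition.
import torch
import torch.nn.functional as F


def mask_iou(gen_mask: torch.Tensor, src_mask: torch.Tensor) -> torch.Tensor:
    """Spatial alignment: IoU between a generated object's mask and the
    corresponding source-object mask (both boolean, shape [F, H, W])."""
    inter = (gen_mask & src_mask).float().sum()
    union = (gen_mask | src_mask).float().sum().clamp(min=1.0)
    return inter / union


def semantic_consistency(obj_region_emb: torch.Tensor, caption_emb: torch.Tensor) -> torch.Tensor:
    """Semantic consistency: cosine similarity between an embedding of the
    generated object region and its target object-caption embedding
    (e.g., from a CLIP-style encoder; the encoder choice is an assumption)."""
    return F.cosine_similarity(obj_region_emb, caption_emb, dim=-1).mean()


def grounding_score(per_object_terms: list[tuple[torch.Tensor, torch.Tensor]]) -> torch.Tensor:
    """Combine both terms per object, then average over objects.
    The equal-weight mean is a placeholder combination rule."""
    scores = [0.5 * (iou + sim) for iou, sim in per_object_terms]
    return torch.stack(scores).mean()
```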

Paper Structure

This paper contains 38 sections, 16 equations, 17 figures, 16 tables, and 1 algorithm.

Figures (17)

  • Figure 1: Overview of MotionGrounder. MotionGrounder transfers motion from the reference videos in (a) to two newly synthesized videos in (b) and (c) with explicit object grounding, enabling object-consistent motion transfer with structural and appearance changes in a training-free, zero-shot manner. Frames in the reference video include color-coded bounding boxes corresponding to objects in the target captions. Please visit our project page (https://kaist-viclab.github.io/motiongrounder-site/) for more results.
  • Figure 2: Overall framework of MotionGrounder. Given a source video $V_S$ with $N$ objects, global caption $c_g$, object captions $\{c_i\}_{i=1}^N$, and corresponding object masks $\{m_i^{1:F}\}_{i=1}^N$, MotionGrounder transfers motion dynamics to a text-defined target video $V_T$. A Flow-based Motion Signal (FMS, Sec. \ref{sec:fms}) provides stable motion guidance, while the Object-Caption Alignment Loss (OCAL, Sec. \ref{sec:ocal}) enforces spatial grounding between each object caption and its designated object region, enabling training-free multi-object motion transfer.
  • Figure 3: Flow-based Motion Signal (FMS, Sec. \ref{sec:fms}). FMS constructs stable latent-space patch trajectories from optical flows estimated on sampled source video frames, and uses the resulting displacements to supervise motion transfer during denoising (a rough illustrative sketch follows this figure list).
  • Figure 4: Object-Caption Alignment Loss (OCAL, Sec. \ref{sec:ocal}). OCAL aggregates object-specific attention maps and aligns them with their corresponding object masks to enforce precise spatial grounding of each object caption during generation (see the second sketch after this list).
  • Figure 5: Qualitative comparison. For clarity, we show color-coded bounding boxes inferred from object masks, where each color corresponds to an object in the caption. While all other methods suffer from spatial misalignment and motion misattribution, our MotionGrounder generates correct objects, preserves spatial alignment, and maintains object-specific motion across all scenarios.
  • ...and 12 more figures
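To make the mechanisms in Figures 3 and 4 more concrete, two heavily hedged sketches follow. Figure 3 describes FMS as building latent-space patch trajectories from optical flow estimated on sampled source frames and supervising denoising with the resulting displacements. One minimal way to picture the flow-to-latent-grid step (an assumption for illustration, not the authors' implementation; the function name is hypothetical) is to pool per-frame flow down to the latent patch resolution and accumulate it over time:

```python
# Hypothetical sketch: turn per-frame optical flow into coarse patch
# trajectories on the latent grid. Not the authors' FMS implementation.
import torch
import torch.nn.functional as F


def flow_to_patch_trajectories(flows: torch.Tensor, latent_hw: tuple[int, int]) -> torch.Tensor:
    """flows: [T, 2, H, W] optical flow between consecutive sampled frames.
    Returns cumulative patch displacements [T, 2, h, w] on the latent grid,
    where (h, w) = latent_hw (e.g., pixel resolution over the VAE/patchify stride)."""
    _, _, H, W = flows.shape
    h, w = latent_hw
    # Average-pool flow down to the latent patch resolution and rescale the
    # displacements into patch units.
    coarse = F.adaptive_avg_pool2d(flows, (h, w))
    coarse[:, 0] *= w / W  # horizontal displacement in patch units
    coarse[:, 1] *= h / H  # vertical displacement in patch units
    # Accumulate over time to obtain each patch's displacement relative to
    # the first sampled frame, i.e., a coarse trajectory per patch.
    return torch.cumsum(coarse, dim=0)
```

Figure 4 describes OCAL as aggregating object-specific attention maps and aligning them with the given object masks. A common way to express such an alignment is to penalize attention mass that falls outside each object's mask; the snippet below sketches that idea under the assumption that per-object attention maps have already been aggregated over heads and layers and resized to the mask grid (again, the names and the exact penalty are hypothetical, not the paper's loss):

```python
# Hypothetical sketch of an attention-to-mask alignment penalty in the
# spirit of OCAL; the paper's actual loss is defined in its OCAL section.
import torch


def attention_mask_alignment(attn_maps: torch.Tensor, obj_masks: torch.Tensor) -> torch.Tensor:
    """attn_maps: [N, h, w] per-object cross-attention maps, already
    aggregated over heads/layers and resized to the mask grid.
    obj_masks:  [N, h, w] binary masks on the same grid.
    Encourages each object caption's attention mass to stay inside its mask."""
    # Normalize each object's attention map into a spatial distribution.
    probs = attn_maps.flatten(1)
    probs = probs / probs.sum(dim=1, keepdim=True).clamp(min=1e-8)
    masks = obj_masks.flatten(1).float()
    # Attention mass landing outside the object's mask; the mean over objects
    # gives a scalar penalty that could steer latents during denoising.
    outside = (probs * (1.0 - masks)).sum(dim=1)
    return outside.mean()
```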