PhysVid: Physics Aware Local Conditioning for Generative Video Models

Saurabh Pathak, Elahe Arani, Mykola Pechenizkiy, Bahram Zonooz

Abstract

Generative video models achieve high visual fidelity but often violate basic physical principles, limiting reliability in real-world settings. Prior attempts to inject physics rely on conditioning: frame-level signals are domain-specific and short-horizon, while global text prompts are coarse and noisy, missing fine-grained dynamics. We present PhysVid, a physics-aware local conditioning scheme that operates over temporally contiguous chunks of frames. Each chunk is annotated with physics-grounded descriptions of states, interactions, and constraints, which are fused with the global prompt via chunk-aware cross-attention during training. At inference, we introduce negative physics prompts (descriptions of locally relevant law violations) to steer generation away from implausible trajectories. On VideoPhy, PhysVid improves physical commonsense scores by $\approx 33\%$ over baseline video generators, and by up to $\approx 8\%$ on VideoPhy2. These results show that local, physics-aware guidance substantially increases physical plausibility in generative video and marks a step toward physics-grounded video models.
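The chunk-aware cross-attention described above can be illustrated with a minimal sketch: each temporally contiguous chunk of frame tokens attends to the global prompt embeddings concatenated with that chunk's local physics-prompt embeddings. This is an assumption-laden simplification (hypothetical function name, single-head attention, no learned query/key/value projections), not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def chunk_aware_cross_attention(frame_tokens, global_ctx, local_ctx, chunk_size):
    """Hypothetical sketch of chunk-aware conditioning.

    frame_tokens: (T, d) video tokens
    global_ctx:   (Lg, d) global-prompt embeddings
    local_ctx:    list of (Ll, d) per-chunk physics-prompt embeddings
    """
    d = frame_tokens.shape[1]
    out = np.empty_like(frame_tokens)
    for i in range(0, frame_tokens.shape[0], chunk_size):
        q = frame_tokens[i:i + chunk_size]                       # queries from this chunk only
        kv = np.concatenate([global_ctx, local_ctx[i // chunk_size]])  # fuse global + local text
        attn = softmax(q @ kv.T / np.sqrt(d))                    # scaled dot-product attention
        out[i:i + chunk_size] = attn @ kv                        # attended context per frame token
    return out
```

A real model would use learned projections, multi-head attention, and residual connections; the point here is only that keys/values differ per chunk while queries come from the video stream.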

Paper Structure

This paper contains 39 sections, 4 equations, 14 figures, 5 tables, 1 algorithm.

Figures (14)

  • Figure 1: Videos generated by PhysVid with 1.7 billion parameters, compared to videos generated by Wan-14B [wang2025wan] on VideoPhy [bansal2025videophy] captions. Despite the smaller model size, PhysVid achieves better physical realism in generated videos.
  • Figure 2: Procedure for generating physics-grounded local prompts during data annotation.
  • Figure 3: Architecture of PhysVid showing local information pathways with chunk-aware cross-attention. Commonly applied procedures such as tokenization, latent encoding, and decoding are implicit and not shown.
  • Figure 4: Generation of local and counterfactual prompts during inference.
  • Figure 5: VideoPhy PC score by category.
  • ...and 9 more figures