
Moiré Video Authentication: A Physical Signature Against AI Video Generation

Yuan Qing, Kunyu Zheng, Lingxiao Li, Boqing Gong, Chang Xiao

Abstract

Recent advances in video generation have made AI-synthesized content increasingly difficult to distinguish from real footage. We propose a physics-based authentication signature that real cameras produce naturally, but that generative models cannot faithfully reproduce. Our approach exploits the Moiré effect: the interference fringes formed when a camera views a compact two-layer grating structure. We derive the Moiré motion invariant, showing that fringe phase and grating image displacement are linearly coupled by optical geometry, independent of viewing distance and grating structure. A verifier extracts both signals from video and tests their correlation. We validate the invariant on both real-captured and AI-generated videos from multiple state-of-the-art generators, and find that real and AI-generated videos produce significantly different correlation signatures, suggesting a robust means of differentiating them. Our work demonstrates that deterministic optical phenomena can serve as physically grounded, verifiable signatures against AI-generated video.
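The verifier described above can be sketched as a simple statistical test: extract the fringe phase series and the grating image displacement series from the video, then check that the two are strongly linearly coupled via their Pearson correlation. The sketch below is a minimal illustration under assumed names; the `verify_moire` helper, the `0.9` acceptance threshold, and the synthetic signals are hypothetical, not the paper's actual extraction pipeline.

```python
import math
import random

def pearson_corr(x, y):
    """Pearson correlation coefficient between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def verify_moire(phase, displacement, threshold=0.9):
    """Accept the video when fringe phase and grating image displacement
    are strongly linearly coupled (|rho| near 1), per the motion invariant.
    The threshold value here is an illustrative assumption."""
    rho = pearson_corr(phase, displacement)
    return abs(rho) >= threshold, rho

# Synthetic check: a linearly coupled pair with small noise should pass.
rng = random.Random(0)
disp, acc = [], 0.0
for _ in range(200):
    acc += rng.gauss(0, 1)
    disp.append(acc)  # simulated grating image displacement (random walk)
phase = [2.5 * d + 0.01 * rng.gauss(0, 1) for d in disp]  # linear coupling
ok, rho = verify_moire(phase, disp)
```

In this toy setup the linear coupling survives the added noise, so `|rho|` lands near 1 and the test accepts; an AI-generated video whose fringes drift independently of camera motion would yield a much lower correlation and be rejected.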

Paper Structure

This paper contains 40 sections, 10 equations, 14 figures, 2 tables.

Figures (14)

  • Figure 1: (a) The Moiré effect, created by overlaying two repetitive layers. (b) Our prototype Moiré signature assembly. Viewing the assembly from slightly different camera angles (left vs. right) causes the fringes to shift in phase according to deterministic physical laws. (c) In a real video, Moiré fringes appear naturally and shift predictably with camera movement. (d) In an AI-generated video, the fringes distort, and their phase shifts do not adhere to the underlying physics.
  • Figure 2: (Left) Our proposed Moiré-based signature requires only a passive 2-layer Moiré structure (e.g., worn as a badge), where standard camera movement naturally captures phase shifts. (Right) Active structured light signatures [schwartz2025verilight, michael2025noise], while also using physical signals for authentication, require specialized external emitter hardware to project patterns onto the scene.
  • Figure 3: (a) Exploded view of the grating assembly, showing its three constituent layers: a flat acrylic base, a printed grating, and a lenticular lens. (b) Two distinct video frames illustrating our algorithm detecting the ArUco markers to isolate the Moiré fringe region. (c) The extracted Moiré fringes after a canonical transformation, corresponding to the frames in (b). The phase shift of the fringes between the two frames is clearly visible.
  • Figure 4: Distribution of Pearson correlation coefficients $|\rho|$ across all video categories. Real recordings (indoor and outdoor) cluster at high correlation, physics-based renderings achieve near-perfect correlation, and AI-generated videos concentrate at markedly lower values.
  • Figure 5: Examples of pure Text-to-Video (T2V) generation failures. The models struggle to synthesize accurate, rigid Moiré patterns from scratch, frequently resulting in unnaturally thick stripes or severely wobbly, non-physical deformations.
  • ...and 9 more figures