
Wan-R1: Verifiable-Reinforcement Learning for Video Reasoning

Ming Liu, Yunbei Zhang, Shilong Liu, Liwen Wang, Wensheng Zhang

Abstract

Video generation models produce visually coherent content but struggle with tasks requiring spatial reasoning and multi-step planning. Reinforcement learning (RL) offers a path to improve generalization, but its effectiveness in video reasoning hinges on reward design, a challenge that has received little systematic study. We investigate this problem by adapting Group Relative Policy Optimization (GRPO) to flow-based video models and training them on maze-solving and robotic navigation tasks. We first show that multimodal reward models fail catastrophically in this setting. To address this, we design verifiable reward functions grounded in objective task metrics. For structured game environments, we introduce a multi-component trajectory reward. For robotic navigation, we propose an embedding-level verifiable reward. Our experiments show that RL fine-tuning with verifiable rewards improves generalization: on complex 3D mazes, our model improves exact-match accuracy by 29.1% over the SFT baseline, and on trap-avoidance tasks by 51.4%. Our systematic reward analysis reveals that verifiable rewards are critical for stable training, while multimodal reward models can lead to degenerate solutions. These findings establish verifiable reward design as a key enabler for robust video reasoning. Code will be publicly available.
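The two reward designs are only sketched in the abstract; the snippet below gives a minimal, illustrative reading of what a multi-component verifiable trajectory reward for grid mazes could look like. It assumes the agent's positions have already been decoded from the generated video into grid cells, and the component weights, helper names, and toy maze are our own assumptions rather than the paper's exact formulation.

```python
# Minimal sketch of a multi-component verifiable trajectory reward for grid mazes.
# All specifics (weights, inputs, the decoding of video frames into cells) are
# illustrative assumptions, not the paper's exact reward.
import numpy as np

def trajectory_reward(traj, maze, goal, gt_path,
                      w_valid=0.3, w_goal=0.4, w_match=0.3):
    """traj: (row, col) cells decoded from the generated video;
    maze: 2D array with 1 = wall, 0 = free;
    goal: target cell; gt_path: ground-truth solution cells."""
    # Component 1: validity -- fraction of visited cells lying in free space.
    valid = float(np.mean([maze[r, c] == 0 for r, c in traj]))

    # Component 2: goal completion -- does the trajectory end at the goal cell?
    reached = float(tuple(traj[-1]) == tuple(goal))

    # Component 3: path agreement -- overlap with the ground-truth solution
    # (order-insensitive IoU here, purely for simplicity).
    visited, reference = set(map(tuple, traj)), set(map(tuple, gt_path))
    match = len(visited & reference) / max(len(visited | reference), 1)

    return w_valid * valid + w_goal * reached + w_match * match

# Toy check: a 4x4 maze with a single wall; the ground-truth path scores 1.0.
maze = np.zeros((4, 4), dtype=int)
maze[1, 1] = 1
gt = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (3, 2), (3, 3)]
print(trajectory_reward(gt, maze, goal=(3, 3), gt_path=gt))
```

Because every component is computed from objective task state rather than a learned judge, the reward cannot be inflated by semantically plausible but incorrect videos, which is the failure mode the abstract attributes to multimodal reward models.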

Paper Structure

This paper contains 64 sections, 13 equations, 8 figures, 8 tables, 1 algorithm.

Figures (8)

  • Figure 1: A VLM fails to provide a reliable reward, whereas verifiable rewards can guarantee correct supervision.
  • Figure 2: Flow-GRPO with a verifiable reward.
  • Figure 3: Target-Bench results. Wan-R1 achieves the highest overall score, with lower displacement errors (ADE, FDE) and miss rate, and higher soft endpoint (SE) and approach consistency (AC) compared to both the base model and Wan-SFT.
  • Figure 4: Test-time scaling performance of Wan-R1 on Irregular Maze tasks. Performance is evaluated across varying numbers of samples ($K \in \{1, 4, 8, 12, 16\}$) and three difficulty levels (Easy, Medium, Hard). RL improves base capability (K=1) while maintaining beneficial scaling with increased sampling (a pass@K-style sketch of this protocol follows this list).
  • Figure 5: Given the video output along with the evaluation prompt describing maze-solving rules, Qwen2.5-VL-7B-Instruct produces a step-by-step analysis and assigns a reward score. Although the generated video exhibits visible glitches and degraded visual quality, the model assigns a perfect score of 1.0. This illustrates the core vulnerability of VLM-based reward models: rather than detecting actual visual artifacts, the model hallucinates quality based on high-level semantic cues (a visible agent, a recognizable maze structure, and apparent goal-directed motion) and incorrectly concludes that the video contains "no glitches, noise, or artifacts."
  • ...and 3 more figures
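Figure 4 reports accuracy as a function of the number of sampled rollouts K. One plausible way to produce such a curve is a pass@K-style protocol in which K videos are sampled per maze and a prompt counts as solved if at least one rollout passes the verifiable check. The sketch below assumes that protocol; the sampler and checker callables, and the stochastic stand-ins in the usage example, are hypothetical and not the paper's evaluation code.

```python
# Sketch of a pass@K-style test-time scaling evaluation, assuming K rollouts
# per prompt and a verifiable success check (e.g., exact match of the decoded
# path against the maze solution). Callable names are hypothetical.
import random

def pass_at_k(prompts, sample_fn, check_fn, k=8):
    """Fraction of prompts for which at least one of k sampled rollouts
    passes the verifiable success check."""
    solved = 0
    for prompt in prompts:
        rollouts = [sample_fn(prompt) for _ in range(k)]
        solved += any(check_fn(prompt, r) for r in rollouts)
    return solved / len(prompts)

# Toy usage: each rollout "succeeds" with probability 0.3, so accuracy
# rises with k, mirroring the qualitative trend in Figure 4.
random.seed(0)
dummy_sample = lambda prompt: random.random() < 0.3
dummy_check = lambda prompt, rollout: rollout
for k in (1, 4, 8, 16):
    print(k, pass_at_k(range(50), dummy_sample, dummy_check, k=k))
```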