Reinforcing Structured Chain-of-Thought for Video Understanding

Peiyao Wang, Haotian Xu, Noranart Vesdapunt, Rui Hou, Jingyi Zhang, Haibin Ling, Oleksandr Obiednikov, Ning Zhou, Kah Kuen Fu

Abstract

Multi-modal Large Language Models (MLLMs) show promise in video understanding. However, their reasoning often suffers from thinking drift and weak temporal comprehension, even when enhanced by Reinforcement Learning (RL) techniques like Group Relative Policy Optimization (GRPO). Moreover, existing RL methods usually depend on Supervised Fine-Tuning (SFT), which requires costly Chain-of-Thought (CoT) annotation and multi-stage training, and enforces fixed reasoning paths, limiting MLLMs' ability to generalize and potentially inducing bias. To overcome these limitations, we introduce Summary-Driven Reinforcement Learning (SDRL), a novel single-stage RL framework that obviates the need for SFT by utilizing a Structured CoT format: Summarize -> Think -> Answer. SDRL introduces two self-supervised mechanisms integrated into the GRPO objective: 1) Consistency of Vision Knowledge (CVK) enforces factual grounding by reducing KL divergence among generated summaries; and 2) Dynamic Variety of Reasoning (DVR) promotes exploration by dynamically modulating thinking diversity based on group accuracy. This novel integration effectively balances alignment and exploration, supervising both the final answer and the reasoning process. Our method achieves state-of-the-art performance on seven public VideoQA datasets.
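
The Summarize -> Think -> Answer structure implies that each rollout is segmented into three tagged spans, so that CVK can target summary tokens, DVR the thinking tokens, and the accuracy reward the final answer. The minimal Python sketch below illustrates one plausible parsing-and-format-reward scheme; the tag names (`<summary>`, `<think>`, `<answer>`) and the binary format reward are assumptions for illustration, not the paper's confirmed implementation.

```python
import re

# Assumed tag layout for the Summarize -> Think -> Answer structure;
# the paper's exact delimiters may differ.
SECTION_TAGS = ("summary", "think", "answer")

def parse_structured_cot(completion: str) -> dict | None:
    """Split one rollout into its summary, think, and answer spans.

    Returns None when the completion violates the expected structure,
    which the format reward below can then penalize.
    """
    sections = {}
    for tag in SECTION_TAGS:
        match = re.search(rf"<{tag}>(.*?)</{tag}>", completion, re.DOTALL)
        if match is None:
            return None  # malformed rollout
        sections[tag] = match.group(1).strip()
    return sections

def format_reward(completion: str) -> float:
    """1.0 if all three tagged spans are present and well-formed, else 0.0."""
    return 0.0 if parse_structured_cot(completion) is None else 1.0
```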

Paper Structure

This paper contains 31 sections, 17 equations, 14 figures, and 7 tables.

Figures (14)

  • Figure 1: Performance comparison and training paradigms for video reasoning models. (a) Performance comparison across several benchmarks. (b)-(d) Training-paradigm analysis: (b) pure RL often yields ungrounded and unstable Chain-of-Thought (CoT) outputs; (c) SFT+RL is costly and complex. In contrast, (d) SDRL (Summary-Driven RL) uses a Structured CoT and self-supervision to achieve stable, grounded video reasoning.
  • Figure 2: Overview of the SDRL Framework, which introduces Structured Chain-of-Thought. The Policy Model ($\pi_{\theta}$) generates $G$ reasoning sequences, each structured as Summary, Think, and Answer. The framework adds two structured objectives implemented via token-wise weights: (a) Consistency of Vision Knowledge (CVK) and (b) Dynamic Variety of Reasoning (DVR). These structured weights, along with standard rewards (Accuracy, Format), are combined to derive the group advantage for policy optimization (see the sketch after this list).
  • Figure 3: BLEU and sBERT scores between different predictions.
  • Figure 4: Accuracy comparison across different tag types under training-free, inference-only settings for selecting an appropriate structural format prior to RL optimization.
  • Figure 5: Comparison of CoTs and final answers generated by VideoChat-R1 and our proposed SDRL method. SDRL demonstrates superior grounding and logic flow, evidenced by higher BLEU and sBERT scores relative to the ground truth summary.
  • ...and 9 more figures
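
Figure 2 describes CVK and DVR as token-wise weights that are combined with the accuracy and format rewards to form the group advantage. The sketch below, building on the parser above, collapses those token-wise weights into sequence-level reward shaping for brevity; the unigram KL proxy for CVK, the accuracy-keyed scaling for DVR, and the coefficients are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from collections import Counter

def unigram_dist(tokens: list[str], vocab: list[str]) -> np.ndarray:
    # Smoothed unigram distribution over a shared vocabulary.
    counts = Counter(tokens)
    p = np.array([counts[t] for t in vocab], dtype=float) + 1e-6
    return p / p.sum()

def cvk_penalties(summaries: list[list[str]]) -> np.ndarray:
    # CVK proxy: per-rollout KL from its summary distribution to the group
    # mean distribution. Summaries that drift from the group consensus are
    # penalized, pushing the policy toward consistent visual grounding.
    # (Assumption: the unigram proxy stands in for whatever distributions
    # the paper's KL is actually computed over.)
    vocab = sorted({t for s in summaries for t in s})
    dists = np.stack([unigram_dist(s, vocab) for s in summaries])
    mean = dists.mean(axis=0)
    return np.array([float(np.sum(p * np.log(p / mean))) for p in dists])

def dvr_bonuses(thinks: list[list[str]], group_accuracy: float) -> np.ndarray:
    # DVR proxy: reward each rollout's thinking for diverging from the group
    # mean, scaled by group accuracy -- explore more when the group already
    # answers correctly, stay aligned when it is failing. The linear scaling
    # is an illustrative assumption.
    vocab = sorted({t for s in thinks for t in s})
    dists = np.stack([unigram_dist(s, vocab) for s in thinks])
    mean = dists.mean(axis=0)
    kl = np.array([float(np.sum(p * np.log(p / mean))) for p in dists])
    return group_accuracy * kl

def group_advantages(rewards: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    # Standard GRPO: normalize rewards within the rollout group.
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Toy usage with G = 4 rollouts (coefficients are placeholders).
acc = np.array([1.0, 1.0, 0.0, 1.0])   # accuracy reward per rollout
fmt = np.array([1.0, 1.0, 1.0, 0.0])   # format reward per rollout
summaries = [["a", "cat", "jumps"], ["a", "cat", "leaps"],
             ["a", "dog", "runs"], ["a", "cat", "jumps"]]
thinks = [["jump", "so", "cat"], ["leap", "thus", "cat"],
          ["run", "hence", "dog"], ["cat", "jumps", "clearly"]]
reward = acc + 0.5 * fmt - 0.1 * cvk_penalties(summaries) \
         + 0.1 * dvr_bonuses(thinks, float(acc.mean()))
print(group_advantages(reward))
```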