
MSRL: Scaling Generative Multimodal Reward Modeling via Multi-Stage Reinforcement Learning

Chenglong Wang, Yifu Huo, Yang Gan, Qiaozhi He, Qi Meng, Bei Li, Yan Wang, Junfu Liu, Tianhua Zhou, Jingbo Zhu, Tong Xiao

Abstract

Recent advances in multimodal reward modeling have been largely driven by a paradigm shift from discriminative to generative approaches. Building on this progress, recent studies have further employed reinforcement learning from verifiable rewards (RLVR) to enhance multimodal reward models (MRMs). Despite their success, RLVR-based training typically relies on labeled multimodal preference data, which are costly and labor-intensive to obtain, making it difficult to scale MRM training. To overcome this limitation, we propose a Multi-Stage Reinforcement Learning (MSRL) approach that achieves scalable RL for MRMs with limited multimodal data. MSRL replaces the conventional RLVR-based training paradigm by first learning a generalizable reward reasoning capability from large-scale textual preference data, and then progressively transferring this capability to multimodal tasks through caption-based and fully multimodal reinforcement learning stages. Furthermore, we introduce a cross-modal knowledge distillation approach to improve preference generalization within MSRL. Extensive experiments demonstrate that MSRL effectively scales the RLVR-based training of generative MRMs and substantially improves their performance across both visual understanding and visual generation tasks (e.g., from 66.6% to 75.9% on VL-RewardBench and from 70.2% to 75.7% on GenAI-Bench), without requiring additional multimodal preference annotations. Our code is available at: https://github.com/wangclnlp/MSRL.
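
To make the staged curriculum concrete, here is a minimal sketch of the three-stage schedule the abstract describes. Everything in it is a hypothetical stand-in (the `model.judge` and `model.update` hooks, the dataset objects, the placeholder reward loop); the authors' actual training code is in the linked repository, and a real implementation would use a policy-gradient method rather than this illustrative loop.

```python
# Hypothetical sketch of the three-stage MSRL schedule from the abstract.
# `model`, `model.judge`, and `model.update` are illustrative stand-ins,
# not the authors' API.

def verifiable_reward(predicted: str, label: str) -> float:
    """RLVR-style reward: 1.0 when the model's preference judgment
    matches the ground-truth label, else 0.0."""
    return 1.0 if predicted.strip() == label.strip() else 0.0

def rlvr_stage(model, dataset):
    """Run one RLVR pass over a preference dataset.
    A real implementation would apply a policy-gradient update
    (e.g., PPO or GRPO) instead of this placeholder loop."""
    for example in dataset:
        judgment = model.judge(example["prompt"], example["responses"])
        reward = verifiable_reward(judgment, example["label"])
        model.update(reward)  # hypothetical optimizer hook
    return model

def train_msrl(model, text_prefs, caption_prefs, multimodal_prefs):
    model = rlvr_stage(model, text_prefs)        # Stage 1: large-scale textual preferences (~400k)
    model = rlvr_stage(model, caption_prefs)     # Stage 2: caption-based transfer to multimodal tasks
    model = rlvr_stage(model, multimodal_prefs)  # Stage 3: adaptation on limited multimodal data
    return model
```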

Figures (11)

  • Figure 1: Illustration of our multi-stage RL approach. Subfigure (a) shows that abundant textual preference data can facilitate scalable RL. Subfigure (b) demonstrates that we can effectively scale multimodal generative reward models through multi-stage RL.
  • Figure 2: An overview of the MSRL approach. We begin by applying RL to large-scale textual preference data (400k examples) to capture rich textual preferences. We then train an RL agent on caption-based data to generalize these preferences to multimodal tasks. During this stage, we also fine-tune the MRMs with CMKD to enhance the generalization (a generic sketch of such a distillation term follows the figure list). Subsequently, we perform RL with a limited amount of multimodal data to enable adaptation. Note that although the illustration uses image understanding as an example, MSRL is a general approach and can be applied to develop MRMs for arbitrary multimodal tasks.
  • Figure 3: Performance scaling with different amounts of textual preference data on VL-RewardBench.
  • Figure 4: Template used for the image understanding task.
  • Figure 5: Template used for the image generation task.
  • ...and 6 more figures
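
This excerpt does not spell out the CMKD objective mentioned in Figure 2, so the following is only a generic cross-modal distillation term: a standard temperature-scaled KL divergence between a teacher's preference distribution (plausibly the text-side model from stage 1) and the multimodal student's. Both the teacher/student pairing and the loss form are assumptions; the paper's actual CMKD formulation may differ.

```python
import torch
import torch.nn.functional as F

def cmkd_loss(student_logits: torch.Tensor,
              teacher_logits: torch.Tensor,
              temperature: float = 2.0) -> torch.Tensor:
    """Generic KL-based distillation term over preference logits.
    Assumption: teacher logits come from a text-side reward model and
    student logits from the multimodal reward model; the paper's exact
    CMKD objective is not specified in this excerpt."""
    t = temperature
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    # Scale by t^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * (t * t)
```

In this reading, the distillation term would be added to the stage-2 fine-tuning loss so that the multimodal student's preference distribution stays close to the textual teacher's, which is one plausible way to "improve preference generalization" as the abstract puts it.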