Bridging Perception and Reasoning: Token Reweighting for RLVR in Multimodal LLMs

Jinda Lu, Junkang Wu, Jinghan Li, Kexin Huang, Shuo Yang, Guoyin Wang, Jiancan Wu, Xiang Wang, Xiangnan He

Abstract

Extending Reinforcement Learning with Verifiable Rewards (RLVR) to multimodal large language models (MLLMs) faces a fundamental challenge: their responses inherently interleave perception-related tokens, which ground visual content, with reasoning-related tokens, which construct reasoning chains. These token types instantiate distinct yet interdependent capacities -- visual grounding and symbolic reasoning -- making isolated optimization insufficient. Through token-level empirical analysis, we demonstrate that optimizing either perception- or reasoning-only tokens consistently underperforms full optimization, underscoring their inherent coupling. To address this, we propose a plug-and-play Token-Reweighting (ToR) strategy that explicitly models this interdependence by identifying critical tokens of both types and dynamically reweighting them during RLVR training. Applied on top of existing methods (e.g., GRPO and DAPO), ToR delivers consistent performance gains across multiple multi-modal reasoning benchmarks, achieving state-of-the-art performance with both accurate visual grounding and coherent reasoning.
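The abstract describes ToR only at a high level, so the following is a minimal, hypothetical sketch of what dynamic token reweighting in an RLVR loss could look like. All specifics are assumptions: here perception-related tokens are flagged by large log-probability shifts against a reference model, reasoning-related tokens by high predictive entropy, and both are upweighted before the policy-gradient surrogate is averaged. The paper's actual criteria, ratios, and loss form may differ.

```python
import numpy as np

def token_reweight(logp_policy, logp_ref, entropy, alpha=0.3, boost=2.0):
    """Hypothetical ToR-style weighting over one response's tokens.

    Flags the top alpha-fraction of tokens by |log-prob shift| vs. a
    reference model (a proxy for perception-related tokens) and by
    entropy (a proxy for reasoning uncertainty), then upweights both.
    """
    logp_diff = logp_policy - logp_ref
    n = len(logp_policy)
    k = max(1, int(alpha * n))
    perception = set(np.argsort(-np.abs(logp_diff))[:k])  # largest shifts
    reasoning = set(np.argsort(-entropy)[:k])             # most uncertain
    weights = np.ones(n)
    for i in perception | reasoning:
        weights[i] = boost
    # Renormalize so the overall loss scale matches uniform weighting.
    return weights * n / weights.sum()

def reweighted_pg_loss(advantage, logp_policy, weights):
    # Token-weighted GRPO-style surrogate (clipping and KL terms omitted).
    return -(weights * advantage * logp_policy).mean()
```

Used this way, the reweighting is plug-and-play in the sense the abstract claims: it only rescales per-token loss contributions, so it can wrap any token-level RLVR objective such as GRPO or DAPO without changing the reward or sampling pipeline.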


Paper Structure

This paper contains 25 sections, 12 equations, 20 figures, 5 tables.

Figures (20)

  • Figure 1: MLLM responses typically involve two types of critical tokens: (1) reasoning-related tokens to construct reasoning chains, and (2) perception-related tokens to ground visual content.
  • Figure 2: Performance comparison on the WeMath benchmark when optimizing different token types with GRPO. Results across selection ratios of 20%, 30%, and 50% show that optimizing either reasoning-only or perception-only tokens underperforms optimizing all tokens. Qualitative examples are selected from the best-performing checkpoints.
  • Figure 3: Comparison of perception strength and reasoning uncertainty under different token ratios during GRPO optimization. (a) Reasoning-token-only optimization suffers from limited perception capacity. (b) Perception-token-only optimization is sensitive to high reasoning uncertainty. (c) Token reweighting yields a balanced regime by adaptively trading off perception strength and reasoning uncertainty, where the shaded region indicates effective optimization outcomes. Dashed lines denote the balance where increased perception strength compensates for reasoning uncertainty.
  • Figure 4: Comparison of optimization behaviors under different token selection strategies. Vanilla GRPO optimizes uniformly across all tokens. Reasoning-only and perception-only optimization concentrate on a single token type, leading to imbalanced training. Token reweighting jointly emphasizes reasoning- and perception-related tokens, achieving a more balanced optimization.
  • Figure 5: Distribution of log-probability differences for Qwen-2.5-VL-7B on HallusionBench, used to identify perception-related tokens.
  • ...and 15 more figures