Seeing but Not Thinking: Routing Distraction in Multimodal Mixture-of-Experts

Haolei Xu, Haiwen Hong, Hongxing Li, Rui Zhou, Yang Zhang, Longtao Huang, Hui Xue, Yongliang Shen, Weiming Lu, Yueting Zhuang

Abstract

Multimodal Mixture-of-Experts (MoE) models have achieved remarkable performance on vision-language tasks. However, we identify a puzzling phenomenon termed Seeing but Not Thinking: models accurately perceive image content yet fail in subsequent reasoning, while correctly solving identical problems presented as pure text. Through systematic analysis, we first verify that cross-modal semantic sharing exists in MoE architectures, ruling out semantic alignment failure as the sole explanation. We then reveal that visual experts and domain experts exhibit layer-wise separation, with image inputs inducing significant routing divergence from text inputs in middle layers where domain experts concentrate. Based on these findings, we propose the Routing Distraction hypothesis: when processing visual inputs, the routing mechanism fails to adequately activate task-relevant reasoning experts. To validate this hypothesis, we design a routing-guided intervention method that enhances domain expert activation. Experiments on three multimodal MoE models across six benchmarks demonstrate consistent improvements, with gains of up to 3.17% on complex visual reasoning tasks. Our analysis further reveals that domain expert identification locates cognitive functions rather than sample-specific solutions, enabling effective transfer across tasks with different information structures.
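The routing-guided intervention described above can be pictured as boosting the router scores of pre-identified domain experts before top-k selection. The sketch below is a minimal illustration of that idea, not the paper's exact formulation: the additive boost, the function name, and the placement of λ are assumptions for clarity.

```python
import numpy as np

def route_with_guidance(router_logits, domain_experts, lam=0.5, top_k=2):
    """Illustrative routing guidance: add a constant lambda to the
    router logits of identified domain experts, then take top-k and
    renormalize the gate weights over the selected experts.
    NOTE: additive boosting is an assumption, not the paper's exact rule."""
    logits = np.asarray(router_logits, dtype=float).copy()
    logits[domain_experts] += lam                    # boost domain experts
    top = np.argsort(logits)[-top_k:][::-1]          # indices of top-k experts
    gates = np.exp(logits[top] - logits[top].max())  # stable softmax over top-k
    gates /= gates.sum()
    return top, gates

# Toy example: 8 experts, experts {2, 5} identified as domain experts.
logits = np.array([0.1, 0.3, 0.25, 0.0, 0.2, 0.28, 0.05, 0.15])
experts, gates = route_with_guidance(logits, domain_experts=[2, 5], lam=0.5)
# With this boost, the two domain experts win the top-2 slots.
```

In a real MoE layer this adjustment would be applied per token inside the router, with λ tuned on a validation set (Figure 5 studies this coefficient's effect on accuracy).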

Paper Structure

This paper contains 54 sections, 6 equations, 7 figures, 7 tables.

Figures (7)

  • Figure 1: Illustration of the Seeing but Not Thinking phenomenon. See Appendix \ref{app:case_study} for details.
  • Figure 2: Overview of our work. We first conduct cross-modal concept intervention to verify semantic sharing in MoE architectures (left, §\ref{sec:semantic_sharing}), then identify domain experts by comparing activation frequencies on domain-specific versus general data (middle, §\ref{sec:expert_specialization}), and finally analyze routing divergence across modalities and apply routing guidance to enhance domain expert activation (right, §\ref{sec:routing_divergence}–§\ref{sec:method}).
  • Figure 3: Analysis of routing mechanisms in multimodal MoE models. (a) Cross-modal semantic sharing verification showing inverted U-shaped intervention success rates. (b) Expert specialization quantification using Gini coefficients. (c) Routing divergence across modalities for three image versions.
  • Figure 4: Layer-wise distribution of domain experts and visual experts. Left: Heatmap showing activation frequency differences (red: higher on math data; blue: higher on general data), with deep red concentrated in layers 6–42. Right: Expert counts per layer, where Overlap indicates experts identified as both math and visual experts.
  • Figure 5: Effect of enhancement coefficient $\lambda$ on reasoning accuracy gains across three models.
  • ...and 2 more figures
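Figure 3(b) quantifies expert specialization with Gini coefficients over expert activation frequencies: a value near 0 means tokens are spread uniformly across experts, while values approaching 1 mean activation concentrates on a few experts. Below is a minimal sketch of that statistic; the exact normalization the authors use is an assumption here.

```python
import numpy as np

def gini(freqs):
    """Gini coefficient of a non-negative activation-frequency vector.
    0 = perfectly uniform activation; values toward 1 = activation
    concentrated on few experts."""
    x = np.sort(np.asarray(freqs, dtype=float))
    n = x.size
    # Standard closed form: G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n
    return 2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

uniform = gini([1, 1, 1, 1])   # every expert fires equally -> 0.0
skewed = gini([0, 0, 0, 10])   # one expert dominates     -> 0.75 for n = 4
```

Computed per layer over domain-specific inputs, this kind of concentration measure makes the middle-layer clustering of domain experts (Figure 4) directly visible.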