
Routing Sensitivity Without Controllability: A Diagnostic Study of Fairness in MoE Language Models

Junhyeok Lee, Kyu Sung Choi

Abstract

Mixture-of-Experts (MoE) language models are universally sensitive to demographic content at the routing level, yet exploiting this sensitivity for fairness control is structurally limited. We introduce Fairness-Aware Routing Equilibrium (FARE), a diagnostic framework designed to probe the limits of routing-level stereotype intervention across diverse MoE architectures. FARE reveals that routing-level preference shifts are either unachievable (Mixtral, Qwen1.5, Qwen3), statistically non-robust (DeepSeekMoE), or accompanied by substantial utility cost (OLMoE, -4.4%p CrowS-Pairs at -6.3%p TQA). Critically, even where log-likelihood preference shifts are robust, they do not transfer to decoded generation: expanded evaluations on the two non-null models (DeepSeekMoE and OLMoE) yield null results across all generation metrics. Group-level expert masking reveals why: bias and core knowledge are deeply entangled within expert groups. These findings indicate that routing sensitivity is necessary but insufficient for stereotype control, and identify specific architectural conditions that can inform the design of more controllable future MoE systems.



Figures (3)

  • Figure 1: Illustrative OLMoE layer-10 example. A minimal female/male wording change produces a distributed routing shift rather than a single bias expert. Sensitivity does not imply controllability.
  • Figure 2: Overview of the FARE pipeline. Top-left: Data & Routing Extraction---neutral and demographic prompts are fed through the MoE model to obtain baseline and conditioned routing distributions. Top-right: Fairness Sensitivity Profiling (FSP)---complementary metrics (ARD, JSD, and PMI) capture routing shifts to produce an expert-level sensitivity score $\varphi(e,l)$. Bottom-left: Architecture-Aware Layer Selection (AALS)---layers are probed and selected based on their fairness-efficiency ratio $R(l)$. Bottom-right: Adaptive Routing Reweighting (ARR)---on selected layers, router logits are modified via soft reweighting to penalize fairness-sensitive experts. The bottom panel demonstrates how FARE shifts the model's preference from a stereotypical to an anti-stereotypical sentence.
  • Figure 3: AALS layer sensitivity $R(l)$ across five models. AALS-selected layers vary by architecture; DeepSeek peaks at layer 1, OLMoE in middle-to-late layers.
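To make the Fairness Sensitivity Profiling step in Figure 2 concrete, the sketch below computes one of its ingredients: the Jensen-Shannon divergence between a baseline and a demographic-conditioned expert routing distribution at a single layer. This is a minimal illustration only, not the paper's implementation; FSP combines ARD, JSD, and PMI into the score $\varphi(e,l)$, and the toy distributions here are invented for demonstration.

```python
import numpy as np

def jsd(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two routing distributions
    over the experts of one MoE layer (in nats, bounded by ln 2)."""
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical routing mass over 8 experts at one layer:
# a neutral prompt vs. the same prompt with a demographic term swapped in.
baseline    = np.array([0.30, 0.25, 0.15, 0.10, 0.08, 0.06, 0.04, 0.02])
conditioned = np.array([0.10, 0.25, 0.30, 0.10, 0.08, 0.06, 0.04, 0.07])

print(f"layer JSD: {jsd(baseline, conditioned):.4f}")
```

A per-expert version of this shift (e.g., the absolute routing difference ARD) would then localize which experts drive the divergence, feeding the expert-level sensitivity score used by AALS and ARR.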