
Expert-Choice Routing Enables Adaptive Computation in Diffusion Language Models

Shuibai Zhang, Caspian Zhuang, Chihan Cui, Zhihan Yang, Fred Zhangzhi Peng, Yanxin Zhang, Haoyue Bai, Zack Jia, Yang Zhou, Guanhua Chen, Ming Liu

Abstract

Diffusion language models (DLMs) enable parallel, non-autoregressive text generation, yet existing DLM mixture-of-experts (MoE) models inherit token-choice (TC) routing from autoregressive systems, leading to load imbalance and rigid computation allocation. We show that expert-choice (EC) routing is a better fit for DLMs: it provides deterministic load balancing by design, yielding higher throughput and faster convergence than TC. Building on the property that EC capacity is externally controllable, we introduce timestep-dependent expert capacity, which varies expert allocation according to the denoising step. We find that allocating more capacity to low-mask-ratio steps consistently achieves the best performance under matched FLOPs, and provide a mechanistic explanation: tokens in low-mask-ratio contexts exhibit an order-of-magnitude higher learning efficiency, so concentrating compute on these steps yields the largest marginal return. Finally, we show that existing pretrained TC DLMs can be retrofitted to EC by replacing only the router, achieving faster convergence and improved accuracy across diverse downstream tasks. Together, these results establish EC routing as a superior paradigm for DLM MoE models and demonstrate that computation in DLMs can be treated as an adaptive policy rather than a fixed architectural constant. Code is available at https://github.com/zhangshuibai/EC-DLM.
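The contrast between the two routing rules can be made concrete with a small sketch. The snippet below is not the authors' implementation (function names such as `token_choice_top1` and `expert_choice` are illustrative); it only shows, for one MoE layer's gating-score matrix, how token-choice routing lets each token pick its best expert, producing data-dependent per-expert loads, while expert-choice routing with capacity $c$ lets each expert pick its top-$c$ tokens, fixing every expert's load at exactly $c$ and leaving $c$ as an externally controllable knob.

```python
# Minimal sketch (not the paper's code) contrasting TC and EC routing for a
# single MoE layer with gating scores of shape [num_tokens, num_experts].
import torch

def token_choice_top1(scores: torch.Tensor):
    """TC routing: every token picks its single best expert.

    Per-expert load is data-dependent, so some experts can be overloaded
    while others sit idle (the imbalance EC avoids)."""
    expert_of_token = scores.argmax(dim=-1)                      # [num_tokens]
    load = torch.bincount(expert_of_token, minlength=scores.shape[1])
    return expert_of_token, load

def expert_choice(scores: torch.Tensor, capacity: int):
    """EC routing: every expert picks its top-`capacity` tokens.

    Each expert processes exactly `capacity` tokens, so balance holds by
    construction, and `capacity` can be set externally (e.g. per step)."""
    _, tokens_of_expert = scores.topk(capacity, dim=0)           # [capacity, num_experts]
    load = torch.full((scores.shape[1],), capacity)
    return tokens_of_expert, load

if __name__ == "__main__":
    torch.manual_seed(0)
    scores = torch.randn(6, 3)                       # 6 tokens, 3 experts (as in Figure 2)
    _, tc_load = token_choice_top1(scores)
    _, ec_load = expert_choice(scores, capacity=2)
    print("TC per-expert load:", tc_load.tolist())   # data-dependent, possibly skewed
    print("EC per-expert load:", ec_load.tolist())   # always [2, 2, 2]
```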


Paper Structure

This paper contains 58 sections, 7 equations, 13 figures, and 6 tables.

Figures (13)

  • Figure 1: Training loss vs. wall-clock time. EC reaches loss 3.75 in 10.6h, $2.0\times$ faster than TC (20.7h).
  • Figure 2: Left: TC (top-1) vs. EC (capacity $c\!=\!2$) routing on a $6\!\times\!3$ gating score matrix. Both methods assign the same total of 6 token--expert pairs, but TC produces imbalanced per-expert loads (1/4/1) while EC guarantees uniform loads (2/2/2) by construction. Right: GPU memory snapshot during inference of LLaDA-2.0-mini (16B) with expert parallelism across 8 H100 GPUs. TC exhibits high variance (std 3.6 GB) with one GPU using 70.3 GB while others use ${\sim}$58--64 GB. EC maintains perfectly uniform memory (std 0.0 GB).
  • Figure 3: Linear-reverse scheduling: as mask ratio $r$ decreases during denoising, per-expert capacity increases, concentrating compute on the most consequential predictions (a minimal sketch of this schedule follows the figure list).
  • Figure 4: Scheduler comparison on OpenWebText (30B tokens, matched average FLOPs). Left: reverse schedulers; Right: forward schedulers. Reverse schedulers allocate more experts to low-mask-ratio steps and consistently outperform their forward counterparts.
  • Figure 5: 8B-A1B pretraining comparison: dynamic EC (linear-reverse, $k$=2--14) vs. static EC ($k$=8) on Nemotron-CC. Left: validation perplexity; Center: MMLU 5-shot accuracy; Right: ARC-Challenge 25-shot accuracy. Dynamic EC outperforms static EC at every checkpoint under matched average FLOPs.
  • ...and 8 more figures
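As a concrete illustration of the timestep-dependent capacity described in the Figure 3 caption, the sketch below implements one plausible form of the linear-reverse schedule: per-expert capacity interpolates linearly from $k=2$ at mask ratio 1 to $k=14$ at mask ratio 0, with the endpoints taken from the Figure 5 caption. The exact interpolation and rounding used by the authors are assumptions here.

```python
# Minimal sketch of a "linear-reverse" capacity schedule, assuming linear
# interpolation in the mask ratio r. Endpoints k_min=2 and k_max=14 follow the
# Figure 5 caption; the authors' exact schedule may differ.
def linear_reverse_capacity(mask_ratio: float, k_min: int = 2, k_max: int = 14) -> int:
    """Return per-step expert capacity k for a mask ratio in [0, 1]."""
    r = min(max(mask_ratio, 0.0), 1.0)
    # r = 1 (fully masked) -> k_min; r = 0 (almost fully denoised) -> k_max.
    return round(k_min + (1.0 - r) * (k_max - k_min))

if __name__ == "__main__":
    for r in (1.0, 0.75, 0.5, 0.25, 0.0):
        print(f"mask ratio {r:.2f} -> capacity {linear_reverse_capacity(r)}")
```

Averaged over a uniform sweep of mask ratios, this schedule uses about 8 experts per step, which is consistent with the matched-FLOPs comparison against static EC ($k$=8) described in the Figure 5 caption.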