TRIMS: Trajectory-Ranked Instruction Masked Supervision for Diffusion Language Models

Lingjie Chen, Ruizhong Qiu, Yuyu Fan, Yanjun Zhao, Hanghang Tong

Abstract

Diffusion language models (DLMs) offer a promising path toward low-latency generation through parallel decoding, but their practical efficiency depends heavily on the decoding trajectory. In practice, this advantage often fails to fully materialize because standard training does not provide explicit supervision over token reveal order, creating a train-inference mismatch that leads to suboptimal decoding behavior. We propose Trajectory-Ranked Instruction Masked Supervision (TRIMS), a simple trajectory-guided supervised fine-tuning framework that injects trajectory supervision into standard Masked Diffusion Language Model (MDLM) training with minimal overhead. Instead of relying on costly DLM-based distillation, TRIMS uses lightweight signals from an autoregressive teacher to guide a trajectory-aware masking strategy, encouraging the model to learn more effective decoding orders. Experiments on LLaDA and Dream across math and coding benchmarks show that TRIMS significantly improves the accuracy-parallelism trade-off over both standard MDLM training and train-free acceleration baselines, while achieving competitive performance with prior distillation-based approaches at substantially lower training cost. Further analysis shows that TRIMS leads to better decoding trajectories, validating the effectiveness of trajectory-guided supervision for DLMs.

Paper Structure

This paper contains 32 sections, 5 equations, 9 figures, 3 tables, and 1 algorithm.

Figures (9)

  • Figure 1: TRIMS improves the accuracy-parallelism trade-off on both LLaDA-Instruct (left) and Dream-Instruct (right), achieving higher TPS (tokens predicted per step) while maintaining competitive accuracy, with much lower training cost than distillation-based methods.
  • Figure 2: TRIMS consists of three stages: (1) an AR teacher estimates token-level difficulty scores, (2) scores are discretized into ordered buckets, and (3) bucket assignments drive trajectory-aware masking that simulates a hard-to-easy decoding order during training.
  • Figure 3: Accuracy-parallelism trade-off of TRIMS and baselines on LLaDA-Instruct. The upper-right region indicates better performance with higher accuracy and parallelism.
  • Figure 4: Accuracy-parallelism trade-off of TRIMS and baselines on Dream-Instruct.
  • Figure 5: Effect of trajectory supervision compared with standard MDLM training. TRIMS consistently improves the accuracy-parallelism trade-off on most benchmarks, with especially clear gains on coding tasks.
  • ...and 4 more figures
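
The three-stage pipeline described in Figure 2 (teacher-derived difficulty scores, discretization into ordered buckets, and trajectory-aware masking) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the score source (e.g., using teacher negative log-likelihood as a difficulty proxy), the number of buckets, and the quantile-based discretization are all assumptions, and the reveal schedule simply follows the hard-to-easy order the caption describes.

```python
import numpy as np

def bucket_scores(scores, n_buckets=4):
    """Stage 2 (assumed): discretize per-token difficulty scores into
    ordered buckets via quantiles. 0 = easiest ... n_buckets-1 = hardest."""
    edges = np.quantile(scores, np.linspace(0, 1, n_buckets + 1)[1:-1])
    return np.digitize(scores, edges)

def trajectory_mask(buckets, t, n_buckets=4):
    """Stage 3 (assumed): build a training mask that simulates a
    hard-to-easy decoding order. t in (0, 1] is the fraction of the
    simulated decoding trajectory already completed; harder buckets are
    revealed earlier. Returns True where a token is still masked."""
    revealed = int(np.ceil(t * n_buckets))          # buckets revealed so far
    return buckets < (n_buckets - revealed)          # easier buckets stay masked
```

For example, with 4 buckets and `t = 0.25`, only the hardest bucket is visible and the rest remain masked; at `t = 1.0` every token is revealed, matching the end of the simulated trajectory.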