$R_\text{dm}$: Re-conceptualizing Distribution Matching as a Reward for Diffusion Distillation

Linqian Fan, Peiqin Sun, Tiancheng Wen, Shun Lu, Chengru Song

Abstract

Diffusion models achieve state-of-the-art generative performance but are fundamentally bottlenecked by their slow, iterative sampling process. While diffusion distillation techniques enable high-fidelity, few-step generation, traditional objectives often restrict the student's performance by anchoring it solely to the teacher. Recent approaches have attempted to break this ceiling by integrating Reinforcement Learning (RL), typically through a simple summation of distillation and RL objectives. In this work, we propose a novel paradigm by re-conceptualizing distribution matching as a reward, denoted as $R_\text{dm}$. This unified perspective bridges the algorithmic gap between Distribution Matching Distillation (DMD) and RL, providing three primary benefits. (1) Enhanced Optimization Stability: We introduce Group Normalized Distribution Matching (GNDM), which adapts standard RL group normalization to stabilize $R_\text{dm}$ estimation. By leveraging group-mean statistics, GNDM establishes a more robust and effective optimization direction. (2) Seamless Reward Integration: Our reward-centric formulation inherently supports adaptive weighting mechanisms, allowing for the fluid combination of DMD with external reward models. (3) Improved Sampling Efficiency: By aligning with RL principles, the framework readily incorporates Importance Sampling (IS), leading to a significant boost in sampling efficiency. Extensive experiments demonstrate that GNDM outperforms vanilla DMD, reducing FID by 1.87. Furthermore, our multi-reward variant, GNDMR, surpasses existing baselines by striking an optimal balance between aesthetic quality and fidelity, achieving a peak HPS of 30.37 and a low FID-SD of 12.21. Ultimately, $R_\text{dm}$ provides a flexible, stable, and efficient framework for real-time, high-fidelity synthesis. Code is coming soon.
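
As a rough, hypothetical sketch of the group-normalization idea described in point (1), the snippet below shows how a per-sample distribution-matching reward could be centered by group-mean statistics (in the spirit of GRPO) before being used as an advantage. The names here (dm_reward, group_size, etc.) are illustrative placeholders, not the authors' implementation.

```python
import torch

def group_normalized_dm_advantage(dm_reward: torch.Tensor,
                                   group_size: int,
                                   eps: float = 1e-6) -> torch.Tensor:
    """Sketch (not the paper's code): turn a per-sample distribution-matching
    reward into a group-normalized advantage. Samples are assumed to be laid
    out as contiguous groups that share the same prompt/condition.

    dm_reward: shape (num_groups * group_size,), one scalar reward per sample.
    """
    rewards = dm_reward.view(-1, group_size)         # (num_groups, group_size)
    group_mean = rewards.mean(dim=1, keepdim=True)   # group-mean baseline
    group_std = rewards.std(dim=1, keepdim=True)     # scale term, as in standard GRPO
    advantage = (rewards - group_mean) / (group_std + eps)
    return advantage.view(-1)
```

This sketch covers only the normalization step; the reward itself would come from the distribution-matching signal between real and fake score estimates (as in DMD), and the normalized advantage then feeds the same policy-gradient machinery as any other reward.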

Paper Structure

This paper contains 25 sections, 22 equations, 10 figures, 4 tables, and 2 algorithms.

Figures (10)

  • Figure 1: (Top) Samples from 4-step vanilla DMD and our GNDM. (Bottom) Samples from 4-step DMDR and our GNDMR. Our models achieve better perceptual fidelity, with fewer artifacts and finer details.
  • Figure 2: Our unified reward framework GNDMR. After re-conceptualizing distribution matching as a reward, $R_{\text{dm}}$ and other rewards drive GRPO updates simultaneously.
  • Figure 3: Larger diffusion timesteps lead to higher score variance.
  • Figure 4: Qualitative Results. Our GNDMR has better aesthetics than other models and fewer artifacts than DMDR.
  • Figure 5: Importance Sampling (IS) improves sampling efficiency. (a) With the same number of training steps, larger batch sizes lead to better reward optimization, but (b) they also require more samples. By introducing IS, the 32×16 (w/ IS) setting achieves comparable performance under a reduced sampling budget.
  • ...and 5 more figures
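
Building on the Figure 2 and Figure 5 captions above, the following sketch (again with hypothetical names, not the authors' code) illustrates how a group-normalized $R_{\text{dm}}$ advantage could be fused with an external reward-model advantage and then optimized with a clipped, importance-sampling-weighted objective, so that generated samples can be reused across several gradient steps.

```python
import torch

def combined_advantage(adv_dm: torch.Tensor, adv_ext: torch.Tensor,
                       w_dm: float = 1.0, w_ext: float = 1.0) -> torch.Tensor:
    """Hypothetical fusion of the distribution-matching advantage with an
    external reward-model advantage; the adaptive weighting described in the
    paper would set w_dm / w_ext, which are fixed placeholders here."""
    return w_dm * adv_dm + w_ext * adv_ext

def is_weighted_policy_loss(logp_new: torch.Tensor, logp_old: torch.Tensor,
                            advantage: torch.Tensor, clip: float = 0.2) -> torch.Tensor:
    """GRPO/PPO-style objective with an importance-sampling ratio, letting
    samples drawn from an older generator be reused for multiple updates
    (cf. the reduced sampling budget shown in Figure 5)."""
    ratio = torch.exp(logp_new - logp_old)                    # importance weight
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1 - clip, 1 + clip) * advantage
    return -torch.min(unclipped, clipped).mean()
```

The clipping follows the standard PPO/GRPO recipe; it is shown only to indicate where the importance-sampling ratio enters, not as the paper's exact objective.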