
Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers

Lei Chen, Yuan Meng, Chen Tang, Xinzhu Ma, Jingyan Jiang, Xin Wang, Zhi Wang, Wenwu Zhu

TL;DR

Diffusion Transformers deliver high-quality generation but incur heavy inference costs. Q-DiT is a PTQ framework tailored to DiTs that combines automatic, fine-grained group quantization of weights with sample-wise dynamic activation quantization; an evolutionary search guided by FID/FVD allocates the quantization granularity of each layer under a bit-ops constraint. The result is near-lossless generation at W6A8 and robust generation at W4A8, outperforming prior PTQ methods on both image and video tasks. Code is released for reproducibility.
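
The two quantizers named above can be pictured with a minimal PyTorch sketch. This is an illustration under standard asymmetric uniform quantization, with illustrative function names; it is not the released Q-DiT code.

```python
import torch

def quantize_weight_groupwise(w: torch.Tensor, n_bits: int = 4, group_size: int = 128):
    """Fake-quantize a weight matrix (out_features, in_features) with one
    scale/zero-point per contiguous group of `group_size` input channels,
    bounding the effect of channel-wise outliers."""
    out_f, in_f = w.shape
    assert in_f % group_size == 0, "in_features must be divisible by group_size"
    qmax = 2 ** n_bits - 1
    wg = w.reshape(out_f, in_f // group_size, group_size)
    w_min = wg.amin(dim=-1, keepdim=True)
    w_max = wg.amax(dim=-1, keepdim=True)
    scale = (w_max - w_min).clamp(min=1e-8) / qmax          # one scale per group
    zero = torch.round(-w_min / scale)
    q = torch.clamp(torch.round(wg / scale) + zero, 0, qmax)
    return ((q - zero) * scale).reshape(out_f, in_f)         # dequantized weights

def quantize_activation_per_sample(x: torch.Tensor, n_bits: int = 8):
    """Fake-quantize activations with statistics computed per sample, on the
    fly, so each sample and each denoising timestep adapts to its own range."""
    qmax = 2 ** n_bits - 1
    flat = x.reshape(x.shape[0], -1)
    x_min = flat.amin(dim=1, keepdim=True)
    x_max = flat.amax(dim=1, keepdim=True)
    scale = (x_max - x_min).clamp(min=1e-8) / qmax           # per-sample scale
    zero = torch.round(-x_min / scale)
    q = torch.clamp(torch.round(flat / scale) + zero, 0, qmax)
    return ((q - zero) * scale).reshape_as(x)
```

In this picture, the group size used for the weights is not fixed globally; choosing it per layer is what the granularity allocator is for.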

Abstract

Recent advancements in diffusion models, particularly the architectural transformation from UNet-based models to Diffusion Transformers (DiTs), significantly improve the quality and scalability of image and video generation. However, despite their impressive capabilities, the substantial computational costs of these large-scale models pose significant challenges for real-world deployment. Post-Training Quantization (PTQ) emerges as a promising solution, enabling model compression and accelerated inference for pretrained models, without the costly retraining. However, research on DiT quantization remains sparse, and existing PTQ frameworks, primarily designed for traditional diffusion models, tend to suffer from biased quantization, leading to notable performance degradation. In this work, we identify that DiTs typically exhibit significant spatial variance in both weights and activations, along with temporal variance in activations. To address these issues, we propose Q-DiT, a novel approach that seamlessly integrates two key techniques: automatic quantization granularity allocation to handle the significant variance of weights and activations across input channels, and sample-wise dynamic activation quantization to adaptively capture activation changes across both timesteps and samples. Extensive experiments conducted on ImageNet and VBench demonstrate the effectiveness of the proposed Q-DiT. Specifically, when quantizing DiT-XL/2 to W6A8 on ImageNet ($256 \times 256$), Q-DiT achieves a remarkable reduction in FID by 1.09 compared to the baseline. Under the more challenging W4A8 setting, it maintains high fidelity in image and video generation, establishing a new benchmark for efficient, high-quality quantization in DiTs. Code is available at \href{https://github.com/Juanerx/Q-DiT}{https://github.com/Juanerx/Q-DiT}.
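
The automatic granularity allocation described above can be viewed as a small evolutionary loop over per-layer group sizes, ranked by the FID (or FVD) of the resulting quantized model and filtered by a bit-ops-style budget. The sketch below is hypothetical: the candidate sizes and the `evaluate_fid` / `within_budget` callables are placeholders, not the paper's implementation.

```python
import random

CANDIDATE_GROUP_SIZES = [32, 64, 128, 256]   # illustrative search space

def search_group_sizes(num_layers, evaluate_fid, within_budget,
                       pop_size=16, iterations=20, mutate_prob=0.2):
    """Return one group size per layer. `evaluate_fid(cfg)` scores a model
    quantized with `cfg` (lower is better); `within_budget(cfg)` enforces the
    bit-ops / scale-storage constraint."""
    def random_config():
        while True:
            cfg = [random.choice(CANDIDATE_GROUP_SIZES) for _ in range(num_layers)]
            if within_budget(cfg):
                return cfg

    def mutate(cfg):
        child = [random.choice(CANDIDATE_GROUP_SIZES) if random.random() < mutate_prob else g
                 for g in cfg]
        return child if within_budget(child) else cfg        # reject over-budget children

    population = [random_config() for _ in range(pop_size)]
    scored = sorted((evaluate_fid(c), c) for c in population)
    for _ in range(iterations):
        parents = [c for _, c in scored[: pop_size // 2]]     # keep the fittest half
        children = [mutate(random.choice(parents)) for _ in range(pop_size)]
        scored = sorted(scored[: pop_size // 2] +
                        [(evaluate_fid(c), c) for c in children])[:pop_size]
    return scored[0][1]                                       # best configuration found
```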

Paper Structure

This paper contains 12 sections, 6 equations, 5 figures, 6 tables, and 1 algorithm.

Figures (5)

  • Figure 1: Overview of the proposed Q-DiT. The weights and activations within each layer are quantized with the same group size. Group size configurations allocated for each layer are based on the evolutionary search results, which are guided by the FID/FVD score between the real samples and samples generated by the quantized model. The activations are dynamically quantized during runtime.
  • Figure 2: Distributions of weights and activations in different layers of DiT-XL/2. The red peaks indicate higher values, while the blue areas represent lower values.
  • Figure 3: Box plot showing the distribution of activation values across various timesteps (from 50 to 0) for the DiT-XL/2 model when generating one image from ImageNet at $256 \times 256$ resolution.
  • Figure 4: Standard deviation of activations in MLP and attention layers across different blocks over 50 timesteps for DiT-XL/2 when generating one image from ImageNet at $256 \times 256$ resolution.
  • Figure 5: Qualitative results. Samples generated by G4W+P4A (one of our baselines) and Q-DiT with W4A8 on ImageNet $256 \times 256$ and ImageNet $512 \times 512$. For each example (a-e), the image generated by G4W+P4A shows notable artifacts and distortions. In contrast, our method produces cleaner and more realistic images with better preservation of textures.