
Diagonal-Tiled Mixed-Precision Attention for Efficient Low-Bit MXFP Inference

Yifu Ding, Xinhao Zhang, Jinyang Guo

Abstract

Transformer-based large language models (LLMs) have demonstrated remarkable performance across a wide range of real-world tasks, but their inference cost remains prohibitively high due to the quadratic complexity of attention and the memory bandwidth limitations of high-precision operations. In this work, we present a low-bit mixed-precision attention kernel using the microscaling floating-point (MXFP) data format, leveraging the low-bit compute capability of next-generation GPU architectures. Our Diagonal-Tiled Mixed-Precision Attention (DMA) combines two low-bit computation modes at the tile level and is implemented as a carefully fused Triton kernel that exploits hardware-level parallelism and memory efficiency, enabling fast inference without compromising model performance. Extensive empirical evaluations on NVIDIA B200 GPUs show that our kernel maintains generation quality with negligible degradation while achieving significant speedups through kernel fusion. We release our code at https://github.com/yifu-ding/MP-Sparse-Attn.



Figures (2)

  • Figure 1: Visualization of quantization error of MXFP4 and NVFP4 format for query, key and attention score.
  • Figure 2: Overview workflow of our Diagonal-Tiled Mixed-Precision Attention. It first applies fused mixed-precision quantization to produce low-bit and high-bit $Q$ and $K$ representations, and then performs diagonal-tiled mixed-precision attention, using higher precision near the diagonal and lower precision elsewhere to balance accuracy and efficiency.
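To make the tile-routing idea in Figure 2 concrete, the following is a minimal, non-fused PyTorch reference sketch rather than the paper's fused Triton MXFP kernel. The function and parameter names (diagonal_tiled_attention_reference, tile, low_bits) are hypothetical, per-row integer fake quantization is used only as a stand-in for the MXFP4/NVFP4 encodings, and causal masking plus online softmax are omitted for clarity.

```python
import torch

def diagonal_tiled_attention_reference(q, k, v, tile=128, low_bits=4):
    """Illustrative sketch: score tiles on the diagonal use high-precision Q/K,
    all other tiles use a simulated low-bit copy of Q/K (assumed naming)."""

    def fake_quant(x, bits):
        # Per-row symmetric fake quantization; a stand-in for MXFP encoding.
        qmax = 2 ** (bits - 1) - 1
        scale = x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / qmax
        return (x / scale).round().clamp(-qmax, qmax) * scale

    n, d = q.shape
    q_lo, k_lo = fake_quant(q, low_bits), fake_quant(k, low_bits)
    scores = torch.empty(n, n, dtype=q.dtype, device=q.device)
    for i in range(0, n, tile):
        for j in range(0, n, tile):
            on_diagonal = i == j  # tile overlaps the score-matrix diagonal
            qb = q[i:i + tile] if on_diagonal else q_lo[i:i + tile]
            kb = k[j:j + tile] if on_diagonal else k_lo[j:j + tile]
            scores[i:i + tile, j:j + tile] = qb @ kb.T
    attn = torch.softmax(scores / d ** 0.5, dim=-1)
    return attn @ v
```

In the actual kernel described by the paper, the quantization of Q and K is fused with the attention computation inside a single Triton kernel; the explicit score materialization and Python-level tile loop above exist only to show which tiles receive which precision.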