RUQuant: Towards Refining Uniform Quantization for Large Language Models

Han Liu, Haotian Gao, Changya Li, Feng Zhang, Xiaotong Zhang, Wei Wang, Hong Yu

Abstract

The increasing size and complexity of large language models (LLMs) pose significant challenges for efficient deployment, particularly under resource constraints. Post-training quantization (PTQ) has emerged as a practical solution by compressing models without requiring retraining. While existing methods focus on uniform quantization schemes for both weights and activations, they often suffer from substantial accuracy degradation due to the non-uniform nature of activation distributions. In this work, we revisit the activation quantization problem from a theoretical perspective grounded in the Lloyd-Max optimality conditions. We identify the core issue as the non-uniform distribution of activations within each quantization interval, which causes the optimal quantization point under the Lloyd-Max criterion to shift away from the midpoint of the interval. To address this issue, we propose a two-stage orthogonal transformation method, RUQuant. In the first stage, activations are divided into blocks, and each block is mapped to uniformly sampled target vectors using composite orthogonal matrices constructed from Householder reflections and Givens rotations. In the second stage, a global Householder reflection is fine-tuned to further minimize quantization error, guided by discrepancies in Transformer outputs. Empirical results show that our method achieves near-optimal quantization performance without requiring model fine-tuning: RUQuant reaches 99.8% of full-precision accuracy with W6A6 and 97% with W4A4 quantization for a 13B LLM, in approximately one minute. A fine-tuned variant yields even higher accuracy, demonstrating the effectiveness and scalability of our approach.
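
The issue the abstract identifies is the classic Lloyd-Max centroid condition: within a quantization interval, the MSE-optimal reconstruction level is the conditional mean of the values falling in that interval, which coincides with the interval midpoint only when those values are uniformly distributed. A minimal toy illustration (the distribution, interval, and variable names below are illustrative, not from the paper):

```python
import numpy as np

# Toy illustration of the Lloyd-Max centroid condition: for a non-uniform
# distribution inside a quantization interval, the MSE-optimal level is the
# conditional mean, not the midpoint used by a uniform quantizer.
rng = np.random.default_rng(0)
x = rng.standard_normal(100_000) * 0.5      # stand-in "activation" samples
lo, hi = 0.5, 1.0                           # one quantization interval
in_bin = x[(x >= lo) & (x < hi)]

midpoint = (lo + hi) / 2                    # uniform-quantizer reconstruction level
centroid = in_bin.mean()                    # Lloyd-Max optimal reconstruction level

mse_mid = np.mean((in_bin - midpoint) ** 2)
mse_cent = np.mean((in_bin - centroid) ** 2)
print(f"midpoint={midpoint:.3f}  centroid={centroid:.3f}")   # centroid shifts toward the denser side
print(f"MSE(midpoint)={mse_mid:.5f}  MSE(centroid)={mse_cent:.5f}")
```

Because the Gaussian density decreases over this interval, the centroid sits below the midpoint and attains a strictly lower error, which is exactly the midpoint shift the abstract refers to.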

Paper Structure

This paper contains 31 sections, 2 theorems, 44 equations, 3 figures, 10 tables, and 4 algorithms.

Key Result

Theorem 1

Let $\mathbf{x} \in \mathbb{R}^d$ be an activation vector sampled from $\mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$, and let $\mathbf{Q} \in \mathbb{R}^{d \times d}$ be an orthogonal matrix such that $\mathbf{Qx} = \mathbf{u}$, where $\mathbf{u}$ is a fixed vector sampled from a uniform distribution and $\mathbf{I}$ is the identity matrix.
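
As a side note, one standard way to realize an orthogonal map sending $\mathbf{x}$ to a target $\mathbf{u}$ with $\|\mathbf{x}\|_2 = \|\mathbf{u}\|_2$ is a single Householder reflection; the derivation below is a generic sketch and may differ from the paper's composite Householder-Givens construction:

$$
\mathbf{v} = \mathbf{x} - \mathbf{u}, \qquad
\mathbf{Q} = \mathbf{I} - \frac{2\,\mathbf{v}\mathbf{v}^{\top}}{\mathbf{v}^{\top}\mathbf{v}}, \qquad
\mathbf{Q}\mathbf{x} = \mathbf{x} - \frac{2\,(\mathbf{v}^{\top}\mathbf{x})}{\mathbf{v}^{\top}\mathbf{v}}\,\mathbf{v} = \mathbf{x} - \mathbf{v} = \mathbf{u},
$$

where the last step uses $\mathbf{v}^{\top}\mathbf{v} = 2\bigl(\|\mathbf{x}\|_2^2 - \mathbf{u}^{\top}\mathbf{x}\bigr) = 2\,\mathbf{v}^{\top}\mathbf{x}$ when the two norms are equal.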

Figures (3)

  • Figure 1: The mean and covariance visualizations of activations before and after RUQuant processing. (a) Original mean vector. (b) Mean vector after RUQuant processing. (c) Original covariance matrix. (d) Covariance matrix after RUQuant processing.
  • Figure 2: The overall framework of RUQuant (the zigzag permutation process is omitted in the figure). Original activations of size $d \times N$ are reshaped into $B \times \frac{dN}{B}$, with all column vectors sharing a common rotation matrix. In Step 1, a sampled vector $\mathbf{x} \in \mathbb{R}^{B}$ is transformed using Householder and Givens rotations based on uniformly generated vectors $\mathbf{u}_1$ and $\mathbf{u}_2$, then reconstructed. In Step 2, a trainable Householder matrix is optimized to minimize quantization loss. (A minimal reshape-and-rotate sketch follows this list.)
  • Figure 3: The mean and covariance visualizations of weights before and after RUQuant processing. (a) Original mean vector. (b) Mean vector after RUQuant processing. (c) Original covariance matrix. (d) Covariance matrix after RUQuant processing.
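
To make the reshape-and-rotate step in the Figure 2 caption concrete, here is a hedged sketch: the block size $B$, the helper name block_rotate, and the random orthogonal $\mathbf{Q}$ are illustrative assumptions (the paper composes Householder reflections and Givens rotations and also applies a zigzag permutation, both omitted here):

```python
import numpy as np

def block_rotate(X: np.ndarray, Q: np.ndarray, B: int) -> np.ndarray:
    """Reshape a d x N activation matrix into B x (dN/B) blocks, apply one
    shared B x B rotation to every column, and restore the original layout."""
    d, N = X.shape
    assert (d * N) % B == 0, "d*N must be divisible by the block size B"
    blocks = X.reshape(B, (d * N) // B)   # B x (dN/B), as in Figure 2
    rotated = Q @ blocks                  # all columns share the same rotation
    return rotated.reshape(d, N)

# Toy usage with a random orthogonal matrix standing in for the composite rotation.
rng = np.random.default_rng(0)
B, d, N = 8, 16, 32
Q, _ = np.linalg.qr(rng.standard_normal((B, B)))
X = rng.standard_normal((d, N))
X_rot = block_rotate(X, Q, B)
print(np.allclose(np.linalg.norm(X), np.linalg.norm(X_rot)))  # True: rotation preserves the norm
```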

Theorems & Definitions (2)

  • Theorem 1: Smoothing effect of orthogonal transformation on activations
  • Theorem 2: Smoothing effect of orthogonal transformation on weights