Swift-SVD: Theoretical Optimality Meets Practical Efficiency in Low-Rank LLM Compression

Ruoling Qi, Yirui Liu, Xuaner Wu, Xiangyu Wang, Ming Li, Chen Chen, Jian Chen, Yin Chen, Qizhen Weng

Abstract

The deployment of Large Language Models is constrained by the memory and bandwidth demands of static weights and the dynamic Key-Value cache. SVD-based compression provides a hardware-friendly way to reduce these costs. However, existing methods suffer from two key limitations: some are suboptimal in reconstruction error, while others are theoretically optimal but practically inefficient. In this paper, we propose Swift-SVD, an activation-aware, closed-form compression framework that simultaneously guarantees theoretical optimality, practical efficiency, and numerical stability. Swift-SVD incrementally aggregates the covariance of output activations over a batch of inputs and performs a single eigenvalue decomposition after aggregation, enabling training-free, fast, and optimal layer-wise low-rank approximation. We employ effective rank to analyze local layer-wise compressibility and design a dynamic rank allocation strategy that jointly accounts for local reconstruction loss and end-to-end layer importance. Extensive experiments across six LLMs and eight datasets demonstrate that Swift-SVD outperforms state-of-the-art baselines, achieving the best compression accuracy while delivering 3-70X speedups in end-to-end compression time. Our code will be released upon acceptance.
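
To make the pipeline concrete, here is a minimal NumPy sketch of the two steps the abstract describes: incremental aggregation of the output-activation covariance $Y^\top Y$ over calibration batches, followed by a single eigendecomposition that yields the closed-form compressed weight. Function names, shapes, and the calibration loop are our illustrative assumptions, not the authors' released implementation.

```python
import numpy as np

def aggregate_covariance(calib_batches, W):
    """Incrementally accumulate the output-activation covariance Y^T Y.

    For each calibration batch X_b, the layer output is Y_b = X_b @ W;
    summing Y_b^T Y_b over batches equals Y^T Y for the stacked
    activations, so no activation batch has to be kept in memory.
    """
    n = W.shape[1]
    cov = np.zeros((n, n))
    for X_b in calib_batches:
        Y_b = X_b @ W
        cov += Y_b.T @ Y_b
    return cov

def swift_svd_compress(W, cov, k):
    """Closed-form rank-k compression from one eigendecomposition.

    The eigenvectors of Y^T Y are the right singular vectors V of Y,
    and its eigenvalues are the squared singular values sigma_i^2.
    """
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]           # re-sort descending
    V_k = eigvecs[:, order[:k]]                 # top-k right singular vectors
    W_k = W @ V_k @ V_k.T                       # optimal rank-k weight (Theorem 3.1)
    eps_k = eigvals[order[k:]].sum()            # minimal loss: sum_{i>k} sigma_i^2
    return W_k, eps_k
```

In practice one would store the two thin factors $W\mathcal{V}_k$ and $\mathcal{V}_k^\top$ rather than their rank-$k$ product; that factorization is where the weight and KV-cache savings come from.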

Paper Structure

This paper contains 22 sections, 1 theorem, 10 equations, 5 figures, 6 tables, 2 algorithms.

Key Result

Theorem 3.1

Given input activations $X$ and weight matrix $W$, let $\mathcal{V}$ and $\Sigma$ denote the right singular vectors and singular values of $Y=XW$, respectively. For any $k<\text{rank}(Y)$, the optimal solution to the problem defined in eq_problem_1 and eq_problem_2 is
$$W^*_k = W\mathcal{V}_k\mathcal{V}_k^\top,$$
where $\mathcal{V}_k\in \mathbb{R}^{n\times k}$ consists of the top-$k$ right singular vectors corresponding to the $k$ largest singular values.
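
Because $XW^*_k = Y\mathcal{V}_k\mathcal{V}_k^\top$ is exactly the rank-$k$ truncated SVD of $Y$, the theorem reduces to the Eckart-Young result, and the minimal reconstruction loss equals the tail energy $\sum_{i>k}\sigma_i^2$. The following self-contained check (ours, for illustration only; all names and dimensions are arbitrary) verifies this numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
s, d, n, k = 256, 64, 48, 8              # samples, input dim, output dim, target rank
X = rng.standard_normal((s, d))
W = rng.standard_normal((d, n))
Y = X @ W

_, sigma, Vt = np.linalg.svd(Y, full_matrices=False)
V_k = Vt[:k].T                           # top-k right singular vectors of Y

W_k = W @ V_k @ V_k.T                    # closed-form optimum from Theorem 3.1
loss = np.linalg.norm(Y - X @ W_k, "fro") ** 2

# The achieved loss matches the Eckart-Young lower bound sum_{i>k} sigma_i^2.
assert np.isclose(loss, (sigma[k:] ** 2).sum())
print(f"rank-{k} loss = tail energy = {loss:.2f}")
```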

Figures (5)

  • Figure 1: Swift-SVD for static weights and KV cache reduction.
  • Figure 2: Overview of Swift-SVD. a) Optimal Activation-Aware Low-Rank Compression: At each transformer layer, Swift-SVD hooks the output activation $Y=XW$ and incrementally aggregates the covariance matrix $Y^TY$. A single eigenvalue decomposition of this covariance yields the singular values $\Sigma$ and right singular vectors $\mathcal{V}$, from which the optimal activation-aware compression matrix $W^*_k$ and minimal reconstruction loss $\epsilon^*_k$ are derived; b) Dynamic Compression: Swift-SVD generates a set of candidate dynamic rank allocation schemes that jointly consider local layer-wise Frobenius loss $\epsilon^*$ and end-to-end layer importance $\beta$. A lightweight grid search is then performed over these candidates—each model is compressed using the optimal solution in a) and evaluated on a validation set—to select the configuration that yields the best end-to-end performance. An illustrative sketch of this dynamic step follows the figure list.
  • Figure 3: Layer-wise NER across distinct modules, and layer importance, in Mistral-7B on the C4 dataset.
  • Figure 4: Impact of calibration sample size $N$ on model performance. We report average accuracy on zero-shot tasks (left) and PPL on C4 (right) across three compression ratios.
  • Figure 5: Throughput improvement and memory efficiency with a batch size of 16 and a generated sequence length of 1024.
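
The dynamic compression step of Figure 2b can be read as a two-stage procedure: score each layer by combining its local loss $\epsilon^*$ with its end-to-end importance $\beta$, turn a few scorings into candidate per-layer rank allocations, and grid-search the candidates on a validation set. The sketch below is our reconstruction under that reading; the scoring rule, the temperature grid, and the helpers compress_fn and eval_fn are hypothetical, and the paper's exact candidate-generation scheme may differ.

```python
import numpy as np

def candidate_allocations(eps, beta, budget, max_rank, temps=(0.5, 1.0, 2.0)):
    """Candidate per-layer rank allocations under a total rank budget.

    eps  : per-layer minimal reconstruction losses (epsilon*), shape (L,)
    beta : per-layer end-to-end importance scores, shape (L,)
    Each temperature t yields one candidate: layers with a larger combined
    score eps * beta**t receive proportionally more rank.
    """
    candidates = []
    for t in temps:
        score = eps * beta ** t
        alloc = np.round(budget * score / score.sum())
        candidates.append(np.clip(alloc, 1, max_rank).astype(int))
    return candidates

def select_allocation(model, candidates, val_set, compress_fn, eval_fn):
    """Lightweight grid search: compress with each candidate allocation
    (using the closed-form solution of Theorem 3.1) and keep the one
    with the lowest validation perplexity."""
    best_alloc, best_ppl = None, float("inf")
    for alloc in candidates:
        ppl = eval_fn(compress_fn(model, alloc), val_set)
        if ppl < best_ppl:
            best_alloc, best_ppl = alloc, ppl
    return best_alloc
```

Because the per-layer compression is closed-form and can reuse the already-aggregated covariances, evaluating each candidate costs little beyond one pass over the validation set, which is presumably what keeps the grid search lightweight.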

Theorems & Definitions (2)

  • Theorem 3.1
  • Proof of Theorem 3.1