SLaB: Sparse-Lowrank-Binary Decomposition for Efficient Large Language Models

Ziwei Li, Yuang Ma, Yi Kang

Abstract

The rapid growth of large language models (LLMs) presents significant deployment challenges due to their massive computational and memory demands. While model compression techniques such as network pruning offer potential solutions, most existing methods fail to maintain good performance at high compression ratios. To address this, we propose SLaB, a novel framework that decomposes each linear-layer weight matrix into three complementary components: a sparse matrix, a low-rank matrix, and a binary matrix. SLaB eliminates the need for retraining and leverages activation-aware pruning scores to guide the decomposition. Experiments on Llama-family models demonstrate that SLaB achieves state-of-the-art performance, reducing perplexity by up to 36% compared with existing methods at a 50% compression ratio and improving zero-shot accuracy by up to 8.98% over the baseline.
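The sketch below illustrates the general idea of a sparse + low-rank + binary decomposition of a weight matrix. It is a minimal, hypothetical example: the magnitude-based sparse mask, truncated-SVD low-rank factor, and per-row sign/scale binarization used here are stand-ins for SLaB's activation-aware procedure, not the authors' actual algorithm.

```python
import numpy as np

def slab_sketch(W, sparsity=0.05, rank=8):
    """Hypothetical sketch: W ~= S + L + B, with S sparse (largest-magnitude
    entries), L low-rank (truncated SVD of the residual), and B a row-wise
    binary (scale * sign) matrix. SLaB itself uses activation-aware scores
    rather than the plain weight magnitudes used here."""
    # Sparse component: keep the top-k largest-magnitude entries of W.
    k = int(sparsity * W.size)
    thresh = np.partition(np.abs(W).ravel(), -k)[-k]
    S = np.where(np.abs(W) >= thresh, W, 0.0)

    # Low-rank component: truncated SVD of the residual W - S.
    U, s, Vt = np.linalg.svd(W - S, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]

    # Binary component: per-row scale times the sign of the remaining residual.
    R = W - S - L
    alpha = np.mean(np.abs(R), axis=1, keepdims=True)
    B = alpha * np.sign(R)
    return S, L, B

W = np.random.randn(256, 256)
S, L, B = slab_sketch(W)
print("relative error:", np.linalg.norm(W - (S + L + B)) / np.linalg.norm(W))
```

Each component targets a different part of the weight's structure: the sparse matrix captures a few large outlier entries, the low-rank matrix captures correlated structure, and the cheap binary matrix absorbs the dense residual.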

Paper Structure

This paper contains 23 sections, 3 theorems, 10 equations, 3 figures, 3 tables, and 1 algorithm.

Key Result

Lemma 1

If $w_0\leq w_1\leq\cdots\leq w_{n-1}$ with $n\geq 2$, and $(\tilde{a},\tilde{b})$ with $\tilde{a}\leq\tilde{b}$ is a minimizer of $f(a,b)=\sum_{k=0}^{n-1}\min\left\{(w_k-a)^2,(w_k-b)^2\right\}$, then there exists $t\in\{0,1,\cdots,n-2\}$ such that $\tilde{a}=\frac{1}{t+1}\sum_{k=0}^{t}w_k$ and $\tilde{b}=\frac{1}{n-1-t}\sum_{k=t+1}^{n-1}w_k$.
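The split structure described by the lemma can be checked numerically with a small brute-force sketch (hypothetical, not the authors' code): scan the split index $t$ over the sorted weights, take the two group means as candidate levels $(a, b)$, and keep the split with the smallest objective value.

```python
import numpy as np

def optimal_two_level(w):
    """Brute-force check of the lemma's structure: for sorted w, the minimizer
    (a, b) of f(a, b) = sum_k min((w_k - a)^2, (w_k - b)^2) splits w at some
    index t, with a the mean of w[:t+1] and b the mean of w[t+1:]."""
    w = np.sort(np.asarray(w, dtype=float))
    best = (np.inf, None, None)
    for t in range(len(w) - 1):  # t in {0, ..., n-2}
        a, b = w[: t + 1].mean(), w[t + 1 :].mean()
        err = np.sum(np.minimum((w - a) ** 2, (w - b) ** 2))
        if err < best[0]:
            best = (err, a, b)
    return best  # (objective value, a, b)

print(optimal_two_level([-1.2, -0.9, 0.1, 0.8, 1.1]))
```

Because the weights are sorted, restricting the search to contiguous prefix/suffix splits is enough, which is what makes the optimal two-level quantizer cheap to compute in practice.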

Figures (3)

  • Figure 1: Compression of the Llama-2 7B model [Llama2] using only low-rank and sparse matrices: perplexity comparison on the WikiText-2 dataset under different rank settings at a $50\%$ compression ratio.
  • Figure 2: Overview of the SLaB framework.
  • Figure 3: Variation of the average Frobenius norm difference between compressed and original layers with respect to rank. Experiments are conducted on the Llama-2 7B model [Llama2] with a $50\%$ compression ratio.

Theorems & Definitions (6)

  • Lemma 1
  • Proof
  • Proposition 1
  • Proof
  • Proposition 2
  • Proof