Structured Unrestricted-Rank Matrices for Parameter Efficient Fine-tuning

Arijit Sehanobish, Avinava Dubey, Krzysztof Choromanski, Somnath Basu Roy Chowdhury, Deepali Jain, Vikas Sindhwani, Snigdha Chaturvedi

TL;DR

This paper addresses the high cost of fine-tuning large transformers by proposing Structured Unrestricted-Rank Matrices (SURM) as a general, parameter-efficient alternative to traditional low-rank updates. SURM encompasses Low Displacement Rank Matrices and Kronecker products, enabling drop-in replacements for Adapters and LoRA with flexible expressiveness and compact parameter budgets. Extensive experiments across vision and NLP tasks show SURMs achieving 5-7% image accuracy gains over LoRA and up to 12x parameter reductions on GLUE, with Circulant SURMs often delivering the best performance and efficiency. The results suggest SURMs can substantially reduce training and storage requirements while preserving or enhancing downstream accuracy, potentially broadening access to large-model fine-tuning in diverse settings.

Abstract

Recent efforts to scale Transformer models have demonstrated rapid progress across a wide range of tasks (Wei et al., 2022). However, fine-tuning these models for downstream tasks is expensive due to their large parameter counts. Parameter-efficient fine-tuning (PEFT) approaches have emerged as a viable alternative, allowing models to be fine-tuned by updating only a small number of parameters. In this work, we propose a general framework for PEFT based on structured unrestricted-rank matrices (SURM), which can serve as a drop-in replacement for popular approaches such as Adapters and LoRA. Unlike methods such as LoRA, SURMs provide more flexibility in finding the right balance between compactness and expressiveness. This is achieved by using low displacement rank matrices (LDRMs), which have not been used in this context before. SURMs remain competitive with baselines, often providing significant quality improvements while using a smaller parameter budget. SURMs achieve 5-7% accuracy gains on various image classification tasks while replacing the low-rank matrices in LoRA, and yield up to a 12x reduction in the number of adapter parameters (with virtually no loss in quality) on the GLUE benchmark.
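
To make the drop-in idea concrete, the sketch below shows one way a SURM-style update could replace LoRA's low-rank product in a frozen linear layer: a circulant matrix parameterized by a single length-d vector, applied via FFT-based circular convolution. This is a minimal illustration under stated assumptions, not the authors' implementation; the class names, the FFT realization, and the restriction to square weight matrices are ours, and the paper's SURM family also covers Toeplitz, Kronecker-product, and more general LDRM parameterizations.

```python
import torch
import torch.nn as nn


class CirculantUpdate(nn.Module):
    """A d x d circulant weight update parameterized by a single length-d
    vector (hypothetical SURM-style replacement for LoRA's B @ A)."""

    def __init__(self, d: int):
        super().__init__()
        # Initialized at zero so the adapted layer starts identical to the base layer.
        self.c = nn.Parameter(torch.zeros(d))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Multiplying by a circulant matrix is a circular convolution, so it
        # can be applied in O(d log d) via FFT without materializing the d x d matrix.
        x_f = torch.fft.rfft(x, dim=-1)
        c_f = torch.fft.rfft(self.c)
        return torch.fft.irfft(x_f * c_f, n=x.shape[-1], dim=-1)


class LinearWithCirculantSURM(nn.Module):
    """Frozen pretrained linear layer plus a trainable circulant update."""

    def __init__(self, base: nn.Linear):
        super().__init__()
        assert base.in_features == base.out_features, "sketch assumes square weights"
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # only the circulant vector is trained
        self.delta = CirculantUpdate(base.in_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.delta(x)


# Usage: wrap a frozen projection of a pretrained model.
layer = LinearWithCirculantSURM(nn.Linear(768, 768))
out = layer(torch.randn(2, 16, 768))  # (batch, sequence, hidden)
```

With hidden size d, the circulant update adds only d trainable parameters per layer, versus roughly 2dr for a rank-r LoRA update, which is the kind of compactness-versus-expressiveness trade-off the paper explores.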

Paper Structure

This paper contains 28 sections, 2 theorems, 17 equations, 11 figures, 8 tables.

Key Result

Theorem A.2

The set of matrices $\mathbf{M}$ which can be written as in Eq. (sum_circ) contains:

Figures (11)

  • Figure 1: Left: Approximating a PSD matrix using a low-rank matrix, a Kronecker product of matrices, a circulant matrix, and a Toeplitz matrix. We repeat the experiment 10 times, and in each trial the low-rank matrix is the worst approximator, followed by the Kronecker product, the circulant matrix, and the Toeplitz matrix. Right: The trade-off between accuracy and parameter count for various PEFT methods, measured across 5 image datasets using CLIP-ViT. Our methods appear in the top right corner (in blue) and achieve the best performance among various strong baseline methods.
  • Figure 2: A schematic diagram illustrating the structure of (a) a circulant matrix, (b) a Toeplitz matrix, and (c) the Kronecker product of two matrices $\mathbf{A}$ and $\mathbf{B}$.
  • Figure 3: A circulant matrix whose first column is the vector $(c_{0},c_{1},c_{2},c_{3},c_{4})$ can be rewritten as a linear combination of orthogonal base circulant matrices (5 matrices whose orange entries equal one and whose remaining entries are zero). Such a closed-form decomposition is in general not possible for matrices $\mathbf{W}(\mathbf{G},\mathbf{H})$, so optimal approximators are instead found by gradient descent.
  • Figure 4: Fitting the pinwheel dataset with a frozen embedding layer using various SURM-based PEFT methods and LoRA.
  • Figure 5: Illustration of the approximation capabilities of different LDRMs. The $y$-axis depicts the relative Frobenius-norm error $\|\mathbf{A}-\mathbf{M}\|_{\mathrm{F}}/\|\mathbf{M}\|_{\mathrm{F}}$ between the ground truth $\mathbf{M}$ and the approximator $\mathbf{A}$. (Left Column, Top): We approximate a random Gaussian matrix $\mathbf{M}$ with matrices $\mathbf{W}(\mathbf{G},\mathbf{H})$ using different values of $r$ (LDR: $r$). (Left Column, Middle): We approximate near-low-rank matrices $\mathbf{M}$ using smaller values of $r$. (Left Column, Bottom): A similar setup for approximating near-low-intrinsic-rank matrices $\mathbf{M}$. (Right Column): We perform analogous studies with circulant and Toeplitz matrices, where the ground truth has low rank or low intrinsic rank. A minimal code sketch of this approximation setup appears after the figure list.
  • ...and 6 more figures
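
The following sketch reproduces the spirit of the approximation studies in Figures 1 and 5: a structured matrix (here, a circulant) is fit to a ground-truth matrix by gradient descent on the Frobenius error and compared against the best low-rank (LoRA-style) approximation of a similar parameter budget. The dimensions, optimizer, step count, and the choice of a random Gaussian ground truth are assumptions for illustration; the paper's exact experimental settings may differ.

```python
import torch


def circulant_from_vector(c: torch.Tensor) -> torch.Tensor:
    """Build the d x d circulant matrix whose first column is c
    (column j is c cyclically shifted down by j positions)."""
    d = c.shape[0]
    return torch.stack([torch.roll(c, shifts=j) for j in range(d)], dim=1)


def relative_frobenius_error(A: torch.Tensor, M: torch.Tensor) -> torch.Tensor:
    """The y-axis quantity of Figure 5: ||A - M||_F / ||M||_F."""
    return torch.linalg.norm(A - M) / torch.linalg.norm(M)


def fit_circulant(M: torch.Tensor, steps: int = 2000, lr: float = 1e-2) -> torch.Tensor:
    """Fit a circulant approximator to M by gradient descent on the Frobenius
    error (a closed-form projection exists for circulants, but gradient
    descent mirrors how general W(G, H) matrices are fit)."""
    c = torch.zeros(M.shape[0], requires_grad=True)
    opt = torch.optim.Adam([c], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.linalg.norm(circulant_from_vector(c) - M)
        loss.backward()
        opt.step()
    return circulant_from_vector(c).detach()


def best_rank_r(M: torch.Tensor, r: int) -> torch.Tensor:
    """Best rank-r approximation of M (Eckart-Young), the low-rank baseline."""
    U, S, Vh = torch.linalg.svd(M)
    return U[:, :r] @ torch.diag(S[:r]) @ Vh[:r, :]


if __name__ == "__main__":
    torch.manual_seed(0)
    d = 64
    M = torch.randn(d, d)                      # ground-truth matrix to approximate
    circ = fit_circulant(M)                    # d trainable parameters
    low_rank = best_rank_r(M, r=1)             # roughly 2d parameters
    print("circulant relative error:", relative_frobenius_error(circ, M).item())
    print("rank-1    relative error:", relative_frobenius_error(low_rank, M).item())
```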

Theorems & Definitions (3)

  • Definition A.1: Skew-Circulant
  • Theorem A.2: Expressivity
  • Theorem A.3