Structured Unrestricted-Rank Matrices for Parameter Efficient Fine-tuning
Arijit Sehanobish, Avinava Dubey, Krzysztof Choromanski, Somnath Basu Roy Chowdhury, Deepali Jain, Vikas Sindhwani, Snigdha Chaturvedi
TL;DR
This paper addresses the high cost of fine-tuning large Transformer models by proposing Structured Unrestricted-Rank Matrices (SURMs) as a general, parameter-efficient alternative to traditional low-rank updates. The SURM class encompasses low displacement rank matrices and Kronecker products, enabling drop-in replacements for Adapters and LoRA that trade off expressiveness against parameter budget. Extensive experiments across vision and NLP tasks show SURMs achieving 5-7% accuracy gains over LoRA on image classification and up to 12x parameter reductions on GLUE, with circulant SURMs often delivering the best balance of quality and efficiency. The results suggest that SURMs can substantially reduce training and storage requirements while preserving or improving downstream accuracy, potentially broadening access to large-model fine-tuning in diverse settings.
Abstract
Recent efforts to scale Transformer models have demonstrated rapid progress across a wide range of tasks (Wei et al., 2022). However, fine-tuning these models for downstream tasks is expensive due to their large parameter counts. Parameter-efficient fine-tuning (PEFT) approaches have emerged as a viable alternative, allowing models to be fine-tuned by updating only a small number of parameters. In this work, we propose a general PEFT framework based on structured unrestricted-rank matrices (SURMs), which can serve as a drop-in replacement for popular approaches such as Adapters and LoRA. Unlike methods such as LoRA, SURMs provide more flexibility in finding the right balance between compactness and expressiveness. This is achieved by using low displacement rank matrices (LDRMs), which have not been used in this context before. SURMs remain competitive with baselines, often providing significant quality improvements while using a smaller parameter budget. Replacing the low-rank matrices in LoRA, SURMs achieve 5-7% accuracy gains on various image classification tasks. They also yield up to a 12x reduction in the number of adapter parameters on the GLUE benchmark, with virtually no loss in quality.
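To make the idea concrete, below is a minimal sketch of the circulant SURM variant used as a LoRA-style drop-in for a square linear layer. This is an illustration under stated assumptions, not the authors' released implementation: the class name CirculantLoRALinear, the zero initialization of the circulant vector, and the scaling factor are choices made here for clarity. The key property it exercises is that a circulant matrix is fully determined by one length-d vector, so its matrix-vector product reduces to a circular convolution computable with FFTs in O(d log d) time, while the resulting update is generically full-rank.

```python
import torch
import torch.nn as nn


class CirculantLoRALinear(nn.Module):
    """Frozen linear layer plus an additive circulant update (hypothetical sketch).

    The update Delta W is a circulant matrix parameterized by a single
    length-d vector `c`, so applying it costs O(d log d) via FFT and adds
    only d trainable parameters per layer.
    """

    def __init__(self, base_linear: nn.Linear, scale: float = 1.0):
        super().__init__()
        assert base_linear.in_features == base_linear.out_features, (
            "a circulant update assumes a square weight matrix"
        )
        self.base = base_linear
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        d = base_linear.in_features
        # Zero init: fine-tuning starts exactly from the pre-trained model.
        self.c = nn.Parameter(torch.zeros(d))
        self.scale = scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path: y = x W^T + b.
        base_out = self.base(x)
        # Circulant path: circulant(c) @ x equals the circular convolution
        # of x with c, computed with real FFTs along the last dimension.
        d = x.shape[-1]
        delta = torch.fft.irfft(
            torch.fft.rfft(x, n=d) * torch.fft.rfft(self.c, n=d), n=d
        )
        return base_out + self.scale * delta


# Example: wrap a frozen 768-d projection; only 768 parameters train.
layer = CirculantLoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(2, 768))
```

For a hidden size of 768, this parameterization trains 768 values per layer in place of the 2·768·r parameters of a rank-r LoRA update, which is the kind of compactness-versus-expressiveness trade-off the paper's framework is designed to navigate.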
