Decompose, Mix, Adapt: A Unified Framework for Parameter-Efficient Neural Network Recombination and Compression

Nazia Tasnim, Shrimai Prabhumoye, Bryan A. Plummer

Abstract

Parameter Recombination (PR) methods aim to efficiently compose the weights of a neural network for applications like Parameter-Efficient Fine-Tuning (PEFT) and Model Compression (MC), among others. Most methods focus on a single PR application, which can make composing them challenging. For example, when deploying a large model you may wish to compress it and also quickly adapt it to new settings. However, PEFT methods can still contain millions of parameters. This may be small compared to the original model size, but it can be problematic in resource-constrained deployments like edge devices, where the adapters take up a larger portion of the compressed model's parameters. To address this, we present Coefficient-gated weight Recombination by Interpolated Shared basis Projections (CRISP), a general approach that seamlessly integrates multiple PR tasks within the same framework. CRISP accomplishes this by factorizing pretrained weights into basis matrices and their component-mixing projections. Sharing basis matrices across layers and adjusting their size enables us to perform MC, whereas the mixer weights' small size (fewer than 200 parameters in some experiments) enables CRISP to support PEFT. Experiments show CRISP outperforms prior methods capable of dual-task application by 4-5\% while also outperforming the state of the art in PEFT by 1.5\% and in PEFT+MC combinations by 1\%. Our code is available at https://github.com/appledora/CRISP-CVPR26.
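To make the basis-and-mixer split concrete, the sketch below illustrates the general idea in NumPy; it is a minimal illustration, not CRISP's exact algorithm. It assumes the shared basis is built by a truncated SVD of the stacked layer weights (an assumed construction; the paper defines its own interpolated shared-basis projection), keeps a small per-layer mixer, and reconstructs each weight as mixer @ basis. Shrinking the basis size k corresponds to MC, while training only the mixers corresponds to PEFT.

```python
import numpy as np

# Hypothetical shapes for illustration; the paper's settings differ.
d_out, d_in, n_layers = 384, 384, 12
k = 64  # shared basis size; smaller k => stronger compression (MC)

rng = np.random.default_rng(0)
weights = [rng.standard_normal((d_out, d_in)) for _ in range(n_layers)]

# Build one basis shared by all layers via truncated SVD (assumed construction).
stacked = np.concatenate(weights, axis=0)          # (n_layers * d_out, d_in)
_, _, vt = np.linalg.svd(stacked, full_matrices=False)
basis = vt[:k]                                     # (k, d_in), frozen and shared

# Each layer keeps only a small mixer that recombines the shared basis.
mixers = [w @ basis.T for w in weights]            # (d_out, k) per layer

# Reconstruction W_l ~= M_l @ B; for PEFT, only the mixers would be fine-tuned.
approx = mixers[0] @ basis
err = np.linalg.norm(weights[0] - approx) / np.linalg.norm(weights[0])
print(f"layer 0 relative reconstruction error: {err:.3f}")
```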


Figures (8)

  • Figure 1: PR approach comparison. (a) Prior work in PR typically focuses on PEFT or MC alone [Wang2024NeuralNP, Zhang_Luo_Yu_Li_Lin_Ye_Zhang_2024, Erko2023HyperDiffusionGI, hu2022lora, 10.5555/3692070.3693369, si2024see, 10.5555/3600270.3601482, liang2024inflora, 10635615, pmlr-v235-nikdan24a, Ahmed_2025_CVPR, MiniViT, hao2022manifold, Rangwani_2024_CVPR, 10678046, wang2025basis, glandorf2025p3b, pmlr-v202-shi23e, 10.1007/s11432-022-3646-6], which can result in inefficient combinations when deployed together. (b) Our unified PR approach, CRISP, decomposes a pretrained model's weights into components that support both MC and PEFT, enabling us to use parameter budgets more effectively even as tasks scale.
  • Figure 2: CRISP decomposes a pretrained weight matrix into a frozen shared basis and small, learnable mixer matrices, then retrofits these components back into the model (Sec. \ref{sec:retrofitting}). Compression is achieved by reducing the basis size, while adaptation is enabled by fine-tuning only the lightweight, nonlinearly gated mixer matrices (Sec. \ref{sec:reparameterization}), allowing both Parameter Recombination (PR) applications to coexist within a single factorized structure with no redundant adapters. A hedged code sketch of this layer structure follows this list.
  • Figure 3: PEFT performance using a ViT-S/16 across a range of trainable parameter budgets, averaged over three datasets: FGVC-Aircraft [maji13fine-grained], CIFAR-100 [Krizhevsky09], and CUB-200-2011 [WahCUB_200_2011]. CRISP consistently outperforms prior work in all settings.
  • Figure 4: Comparing ImageNet [5206848] performance with and without 8-bit PTQ [wu2020integerquantizationdeeplearning] compression. We find CRISP accurately reproduces the original model's performance while also demonstrating effective compositionality with other compression techniques.
  • Figure 5: Impact of mixer matrix dimensions on model capacity and performance. (a) Fixed columns ($s=16$): Increasing rows reduces parameters but collapses accuracy. (b) Fixed rows ($r=16$): Increasing columns scales capacity and recovers performance. Results across CIFAR-100, CUB-Birds, and FGVC-Aircraft demonstrate that basis capacity (columns) is the dominant factor for maintaining model quality, while coefficient expressivity (rows) plays a secondary role. Red dotted line: original model.
  • ...and 3 more figures
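For the layer structure sketched in Figure 2, below is a hedged PyTorch sketch of a retrofitted linear layer: a frozen basis shared across layers, plus a small trainable mixer whose coefficients pass through a nonlinear gate. The sigmoid gate, the initialization, and the class name MixerLinear are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixerLinear(nn.Module):
    """Linear layer rebuilt from a frozen shared basis and a gated mixer.

    The sigmoid gate and initialization are illustrative assumptions;
    the paper defines its own gating and retrofitting procedure.
    """
    def __init__(self, shared_basis: torch.Tensor, d_out: int):
        super().__init__()
        k, _ = shared_basis.shape
        self.register_buffer("basis", shared_basis)              # frozen, shared
        self.mixer = nn.Parameter(0.02 * torch.randn(d_out, k))  # small, trainable
        self.gate = nn.Parameter(torch.zeros(k))                 # coefficient gate
        self.bias = nn.Parameter(torch.zeros(d_out))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Nonlinearly gate the mixing coefficients, then recombine the basis.
        weight = (self.mixer * torch.sigmoid(self.gate)) @ self.basis  # (d_out, d_in)
        return F.linear(x, weight, self.bias)

# One shared basis serves every retrofitted layer; smaller k => more compression.
basis = torch.randn(64, 384)  # (k, d_in); hypothetical ViT-S/16-like width
layers = [MixerLinear(basis, d_out=384) for _ in range(12)]
print(layers[0](torch.randn(2, 384)).shape)  # torch.Size([2, 384])
```

In practice, each mixer would be initialized from the decomposition of its layer's pretrained weight (as in the NumPy sketch above) so that the retrofitted model reproduces the original network before any fine-tuning.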