Model Merging via Data-Free Covariance Estimation

Marawan Gamal Abdel Hameed, Derek Tam, Pascal Jr Tikeng Notsawo, Colin Raffel, Guillaume Rabusseau

Abstract

Model merging provides a way of cheaply combining individual models to produce a single model that inherits each individual model's capabilities. While some merging methods can approach the performance of multitask training, they are often heuristically motivated and lack theoretical justification. A principled alternative is to pose model merging as a layer-wise optimization problem that directly minimizes interference between tasks. However, this formulation requires estimating per-layer covariance matrices from data, which may not be available when performing merging. In contrast, many of the heuristically motivated methods do not require auxiliary data, making them practically advantageous. In this work, we revisit the interference-minimization framework and show that, under certain conditions, covariance matrices can be estimated directly from difference matrices, eliminating the need for data while also reducing computational costs. We validate our approach across vision and language benchmarks on models ranging from 86M to 7B parameters, outperforming previous data-free state-of-the-art merging methods.
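
To make the setup concrete, here is a minimal NumPy sketch of a RegMean-style layer-wise merge in which each task's input covariance is estimated without data from that task's difference matrix, in the spirit of the ACTMat estimate ${\bm{\Delta}}_t^\top{\bm{\Delta}}_t \approx {\bm{C}}_t$ shown in Figure 1. The normalization and the `lam` mixing coefficient are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def merge_layer(w_base, w_tasks, lam=0.9):
    """Merge one linear layer's weights from several fine-tuned models.

    Solves min_W sum_t ||(W - W_t) C_t^{1/2}||_F^2 in closed form, with each
    input covariance C_t estimated data-free as Delta_t^T Delta_t.

    w_base:  (d_out, d_in) pretrained weight matrix
    w_tasks: list of (d_out, d_in) fine-tuned weight matrices
    lam:     hypothetical mixing with identity to keep C_t well-conditioned
    """
    d_in = w_base.shape[1]
    covs, targets = [], []
    for w_t in w_tasks:
        delta = w_t - w_base                 # task difference matrix Delta_t
        c_t = delta.T @ delta                # data-free covariance estimate
        c_t /= np.linalg.norm(c_t) + 1e-12   # equalize scale across tasks
        c_t = lam * c_t + (1.0 - lam) * np.eye(d_in) / d_in
        covs.append(c_t)
        targets.append(w_t @ c_t)
    # Stationarity: sum_t (W - W_t) C_t = 0  =>  W* = (sum_t W_t C_t)(sum_t C_t)^{-1}
    return sum(targets) @ np.linalg.inv(sum(covs))
```

Applied independently to every linear layer (with non-matrix parameters handled by, e.g., simple averaging), a merge of this form touches no task data at all.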

Paper Structure

This paper contains 29 sections, 10 theorems, 42 equations, 6 figures, and 6 tables.

Key Result

Theorem 3.1

Consider a linear layer fine-tuned using full-batch gradient descent for $K$ iterations with learning rate $\eta$, and let ${\bm{z}}^{(k)}$, ${\bm{g}}^{(k)}$ denote the layer's input and its output gradient at iteration $k$, respectively. Define the accumulated gradient mean and accumulated second moment, where the expectation is taken over the $t$-th task distribution $\mathcal{D}_t$ and $\|\cdot\|$ denotes a matrix norm. Then the Gram matrix of the difference matrix approximates the final input covariance, ${\bm{\Delta}}_t^\top{\bm{\Delta}}_t \approx {\bm{C}}_t^{(K)}$, up to three angular error terms ${\epsilon}^{(\mathrm{cross})}$, ${\epsilon}^{(\mathrm{corr})}$, and ${\epsilon}^{(\mathrm{drift})}$ (measured empirically in Figure 2).
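
A heuristic reading of why the difference matrix carries covariance information (a sketch reconstructed from the theorem's setup and the error terms measured in Figure 2, not the paper's proof): each full-batch gradient-descent step changes the weight by $-\eta\,\mathbb{E}\big[{\bm{g}}^{(k)}{\bm{z}}^{(k)\top}\big]$, so the difference matrix and its Gram matrix take the form

$$
{\bm{\Delta}}_t = -\eta \sum_{k=1}^{K} \mathbb{E}\big[{\bm{g}}^{(k)}{\bm{z}}^{(k)\top}\big],
\qquad
{\bm{\Delta}}_t^\top{\bm{\Delta}}_t = \eta^2 \sum_{k,k'=1}^{K} \mathbb{E}\big[{\bm{z}}^{(k)}{\bm{g}}^{(k)\top}\big]\,\mathbb{E}\big[{\bm{g}}^{(k')}{\bm{z}}^{(k')\top}\big].
$$

When the cross-iteration terms ($k \neq k'$) are small, the gradients are only weakly correlated with the inputs, and the input distribution drifts little during fine-tuning, the sum concentrates on terms proportional to $\mathbb{E}\big[{\bm{z}}{\bm{z}}^\top\big] = {\bm{C}}_t$, so ${\bm{\Delta}}_t^\top{\bm{\Delta}}_t \approx \kappa^{-1}{\bm{C}}_t$ up to the scaling coefficient studied in Figure 3; the three conditions correspond to ${\epsilon}^{(\mathrm{cross})}$, ${\epsilon}^{(\mathrm{corr})}$, and ${\epsilon}^{(\mathrm{drift})}$, respectively.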

Figures (6)

  • Figure 1: Left: RegMean requires data to compute activation covariances ${\bm{C}}_t$, while ACTMat estimates them directly from the difference matrices as ${\bm{\Delta}}_t^\top{\bm{\Delta}}_t \approx {\bm{C}}_t$. Right: On T5-Large, ACTMat nearly matches RegMean's accuracy without any data, while substantially outperforming other data-free baselines.
  • Figure 2: Empirical measurement of the three angular error terms in Theorem 3.1 on ViT-B/16. (a) Cross-term error ${\epsilon}^{(\mathrm{cross})}$. (b) Correlation error ${\epsilon}^{(\mathrm{corr})}$. (c) Drift error ${\epsilon}^{(\mathrm{drift})}$ measured during training. All three terms remain small across layers and tasks, indicating that ${\bm{\Delta}}_t^\top{\bm{\Delta}}_t$ is well-aligned with the final covariance ${\bm{C}}_t^{(K)}$.
  • Figure 3: Distribution of scaling-coefficient ratios $\kappa_i / \kappa_j$ over all dataset pairs, for each layer in the ViT-B/16 model, where $\kappa_i = \|{\bm{C}}_i\|_F / \|{\bm{\Delta}}_i^\top{\bm{\Delta}}_i\|_F$ measures the ratio of the norm of the true covariance matrix to the norm of its ACTMat estimate (a toy computation of this ratio appears in the sketch after this list).
  • Figure 4: Comparison of test accuracy across merging methods in multiple settings (NLP models fine-tuned on 7 tasks and vision models fine-tuned on 8 tasks). Hatched bars indicate that the method is not data-free. Stacked bars with a dotted pattern show the method's performance on its own (bottom bar) and when combined with KnOTS (Stoica et al., 2025) for LoRA fine-tuned models (an improvement only for Iso-C).
  • Figure 5: Absolute Pearson correlation coefficients.
  • ...and 1 more figure
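
The quantities tracked in Figures 2 and 3 are easy to reproduce on a toy problem. The sketch below (a synthetic linear-regression task; all sizes and hyperparameters are illustrative) trains a linear layer with full-batch gradient descent and then reports the Frobenius-cosine alignment between ${\bm{\Delta}}^\top{\bm{\Delta}}$ and the empirical input covariance, together with the scaling ratio $\kappa$; how close the alignment gets to 1 depends on how well the theorem's assumptions hold in the toy setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n = 32, 64, 2048
steps, eta = 100, 0.05

# Synthetic task with a non-isotropic input covariance.
A = rng.normal(size=(d_in, d_in)) / np.sqrt(d_in)
Z = rng.normal(size=(n, d_in)) @ A            # inputs, one sample per row
W_true = rng.normal(size=(d_out, d_in))
Y = Z @ W_true.T                              # linear targets

W0 = 0.01 * rng.normal(size=(d_out, d_in))    # "pretrained" weights
W = W0.copy()
for _ in range(steps):                        # full-batch gradient descent
    G = Z @ W.T - Y                           # per-sample output gradients
    W = W - eta * (G.T @ Z) / n               # gradient of 0.5 * mean squared error

delta = W - W0                                # difference matrix Delta
C_hat = delta.T @ delta                       # data-free covariance estimate
C = Z.T @ Z / n                               # empirical (uncentered) input covariance

cos = np.sum(C_hat * C) / (np.linalg.norm(C_hat) * np.linalg.norm(C))
kappa = np.linalg.norm(C) / np.linalg.norm(C_hat)
print(f"Frobenius-cosine alignment: {cos:.3f}   kappa: {kappa:.3g}")
```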

Theorems & Definitions (17)

  • Theorem 3.1: Covariance Estimation
  • Proposition 3.1
  • Theorem 3.2
  • Proof
  • Lemma B.1
  • Proof
  • Lemma B.2
  • Proof
  • Theorem B.2: Covariance Estimation
  • Proof
  • ...and 7 more