Pruning via Merging: Compressing LLMs via Manifold Alignment Based Layer Merging
Deyuan Liu, Zhanyue Qin, Hairu Wang, Zhao Yang, Zecheng Wang, Fangying Rong, Qingbin Liu, Yanchao Hao, Xi Chen, Cunhang Fan, Zhao Lv, Zhiying Tu, Dianhui Chu, Bo Li, Dianbo Sui
TL;DR
Large language models pose deployment challenges due to their scale. We propose Manifold-Based Knowledge Alignment and Layer Merging (MKA), which first maps per-layer activations into low-dimensional manifolds via diffusion maps and then merges highly similar layer pairs, using a mutual-information–driven similarity score to set the merge coefficient, $\alpha \approx S_{lm}$, and combining parameters as $\tilde{\boldsymbol{\theta}}_c=\alpha\boldsymbol{\theta}_l+(1-\alpha)\boldsymbol{\theta}_m$. The method draws on the Information Bottleneck objective to preserve relevant information while compressing, yielding substantial compression with minimal accuracy loss; e.g., on MMLU with Llama3-8B, MKA achieves 43.75% compression with only a 2.82% drop, and improves further when combined with quantization (e.g., SmoothQuant, GPTQ, AWQ). Across multiple benchmarks and models, MKA outperforms traditional pruning in both compression rate and accuracy retention, offering a scalable, hardware-friendly path for deploying efficient LLMs.
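To make the merging step concrete, below is a minimal PyTorch sketch of the formula above, assuming the two layers share the same architecture so their parameters align shape-for-shape. The `similarity` argument is a stand-in for the pairwise score $S_{lm}$; the diffusion-map embedding and the information-based similarity computation are not reproduced here.

```python
# Minimal sketch of the merge step theta_c = alpha * theta_l + (1 - alpha) * theta_m.
# Assumes both layers have identical structure; `similarity` stands in for the
# pairwise score S_lm from the TL;DR (its computation is not shown here).
import torch


def merge_layers(layer_l: torch.nn.Module, layer_m: torch.nn.Module,
                 similarity: float) -> torch.nn.Module:
    """Fold layer m into layer l in place with merge coefficient alpha ~= S_lm."""
    alpha = float(similarity)
    with torch.no_grad():
        for p_l, p_m in zip(layer_l.parameters(), layer_m.parameters()):
            # Weighted average of the two layers' parameters.
            p_l.copy_(alpha * p_l + (1.0 - alpha) * p_m)
    return layer_l
```

After merging, layer $m$ can be dropped from the model's layer stack; the reduction in depth is what produces the compression.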
Abstract
While large language models (LLMs) excel in many domains, their complexity and scale challenge deployment in resource-limited environments. Current compression techniques, such as parameter pruning, often fail to effectively utilize the knowledge from pruned parameters. To address these challenges, we propose Manifold-Based Knowledge Alignment and Layer Merging Compression (MKA), a novel approach that uses manifold learning and the Normalized Pairwise Information Bottleneck (NPIB) measure to merge similar layers, reducing model size while preserving essential performance. We evaluate MKA on multiple benchmark datasets and various LLMs. Our findings show that MKA not only preserves model performance but also achieves substantial compression ratios, outperforming traditional pruning methods. Moreover, when coupled with quantization, MKA delivers even greater compression. Specifically, on the MMLU dataset using the Llama3-8B model, MKA achieves a compression ratio of 43.75% with a minimal performance decrease of only 2.82%. The proposed MKA method offers a resource-efficient and performance-preserving model compression technique for LLMs.
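The abstract leaves the pair-selection procedure implicit. The sketch below shows one plausible greedy loop over per-layer activation embeddings (e.g., diffusion-map coordinates computed on calibration data); it is only an illustration under stated assumptions, not the paper's exact algorithm, and the cosine score is a self-contained stand-in for the NPIB similarity.

```python
# Hedged sketch of a greedy merge loop over per-layer activation embeddings.
# The embeddings are assumed to come from a manifold-learning step (e.g.,
# diffusion maps); the cosine score is a stand-in for the NPIB similarity.
import numpy as np


def most_similar_pair(embeddings):
    """Return (l, m, score) for the adjacent layer pair with the highest score."""
    best = (0, 1, float("-inf"))
    for l in range(len(embeddings) - 1):
        a, b = embeddings[l], embeddings[l + 1]
        score = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
        if score > best[2]:
            best = (l, l + 1, score)
    return best


def plan_merges(embeddings, target_layers):
    """Greedily pick layer pairs to merge until only `target_layers` remain.
    Returns the indices (into the original stack) of the surviving layers."""
    surviving = list(range(len(embeddings)))
    embeddings = list(embeddings)
    while len(embeddings) > target_layers:
        l, m, _ = most_similar_pair(embeddings)
        # In MKA, layer m's parameters would be folded into layer l at this point
        # (see the merge formula above); here we only track which layers survive.
        del embeddings[m]
        del surviving[m]
    return surviving
```

Restricting candidates to adjacent pairs is an assumption made to keep the sketch short; the general idea is simply to keep merging the most similar layers until the target compression ratio is reached.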
