
Big2Small: A Unifying Neural Network Framework for Model Compression

Jing-Xiao Liao, Haoran Wang, Tao Li, Daoming Lyu, Yi Zhang, Chengjun Cai, Feng-Lei Fan

Abstract

With the development of foundation models, model compression has become a critical requirement. Various model compression approaches have been proposed, such as low-rank decomposition, pruning, quantization, ergodic dynamic systems, and knowledge distillation, each based on different heuristics. To elevate the field from fragmentation to a principled discipline, we construct a unifying mathematical framework for model compression grounded in measure theory. We further demonstrate that each model compression technique is mathematically equivalent to a neural network subject to a regularization. Building upon this mathematical and structural equivalence, we propose an experimentally verified, data-free model compression framework, termed \textit{Big2Small}, which translates Implicit Neural Representations (INRs) from the data domain to the domain of network parameters. \textit{Big2Small} trains compact INRs to encode the weights of larger models and reconstructs the weights during inference. To enhance reconstruction fidelity, we introduce Outlier-Aware Preprocessing to handle extreme weight values and a Frequency-Aware Loss function to preserve high-frequency details. Experiments on image classification and segmentation demonstrate that \textit{Big2Small} achieves competitive accuracy and compression ratios compared to state-of-the-art baselines.
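The core compress/decompress idea can be sketched numerically. Below, a fixed sinusoidal feature map stands in for a SIREN-style INR: each weight's normalized (row, col) index is mapped to its value, and only the INR's few coefficients are stored. The smooth toy weight tensor, the frequency grid, and all sizes are illustrative assumptions, not the paper's implementation (real weights are noisier, which is why the paper adds Outlier-Aware Preprocessing and a Frequency-Aware Loss).

```python
import numpy as np

n = 32
rows, cols = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
x = rows / (n - 1) * 2 - 1   # normalize indices to [-1, 1]
y = cols / (n - 1) * 2 - 1

# Toy "big model" weight tensor with smooth structure (illustrative only).
W = np.sin(3 * x) * np.cos(2 * y)

coords = np.stack([x.ravel(), y.ravel()], axis=1)          # (1024, 2)

# Fixed sinusoidal feature map: a stand-in for an INR with periodic
# activations, using frequency pairs k1 in 0..4, k2 in -4..4.
k1, k2 = np.meshgrid(np.arange(5), np.arange(-4, 5), indexing="ij")
K = np.stack([k1.ravel(), k2.ravel()], axis=1).T           # (2, 45)
Z = coords @ K
phi = np.concatenate([np.sin(Z), np.cos(Z)], axis=1)       # (1024, 90)

# "Compression": fit 90 coefficients to 1024 weights by ridge regression.
t = W.ravel()
theta = np.linalg.solve(phi.T @ phi + 1e-6 * np.eye(phi.shape[1]), phi.T @ t)

# "Decompression" at inference: re-evaluate the INR on the index grid.
W_hat = (phi @ theta).reshape(n, n)
ratio = W.size / theta.size
err = np.abs(W - W_hat).max()
print(f"compression ratio ~{ratio:.1f}x, max reconstruction error {err:.2e}")
```

Because the toy tensor lies in the span of the sinusoidal basis, reconstruction is near-exact here; in practice the trade-off between the INR's size and reconstruction fidelity is what sets the achievable compression ratio.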


Paper Structure

This paper contains 30 sections, 2 theorems, 31 equations, 10 figures, and 5 tables.

Key Result

Theorem 1

Let the original weight parameter set be $\Sigma$ with $\mathfrak{m}(\Sigma) > 0$. For any original weight $\theta^* \in \Sigma$ and error tolerance $\epsilon > 0$, there exist paired mapping functions $g: \Sigma \to \Sigma^{\dagger}$ and $g^{-1}: \Sigma^{\dagger} \to \hat{\Sigma}$, corresponding t…

Figures (10)

  • Figure 1: The theoretical framework of Theorem 1.
  • Figure 2: The framework of Theorem 2.
  • Figure 3: An overview of the proposed Big2Small framework. It uses a "Compression–Decompression" architecture that encodes discrete weight parameter tensors with lightweight INRs and reconstructs the original weights at inference time.
  • Figure 4: The structure of INR.
  • Figure 5: Visualizing segmentation results of the original UNet models and the compressed ones.
  • ...and 5 more figures

Theorems & Definitions (9)

  • Definition 1: Parameter Space and Sets
  • Definition 2: Distance Function
  • Definition 3: Sets Size
  • Definition 4: Model Compression Mapping
  • Definition 5: Sets of Compression Methods
  • Theorem 1: Universal Compressibility
  • Proof of Theorem 1
  • Theorem 2: Structural Equivalence
  • Proof of Theorem 2