
Collaborative Multi-Mode Pruning for Vision-Language Models

Zimeng Wu, Yunhong Wang, Donghao Wang, Jiaxin Chen

Abstract

Vision-Language Models (VLMs) have advanced rapidly within the unified Transformer architecture, yet their deployment on resource-constrained devices remains challenging due to high computational complexity. While pruning has emerged as an effective technique for compressing VLMs, existing approaches predominantly focus on a single mode, pruning either parameters or tokens, and fail to fully explore the inherent redundancy in each mode, which leads to substantial performance degradation at high pruning ratios. To address these limitations, we propose Collaborative Multi-Mode Pruning (CoMP), a novel framework tailored to VLMs that performs joint parameter and token pruning. Specifically, we first design a Collaborative Importance Metric (CIM) that accounts for the mutual interference between the coupled parameters and tokens: it incorporates the distinct significance of tokens into the computation of parameter importance scores, while simultaneously mitigating the effect of pruned parameters on token importance scores. Moreover, we develop a Multi-Mode Pruning Strategy (MPS) that decomposes the overall pruning process into a sequence of stages; in each stage, we estimate the priority of the different pruning modes based on their pruning cost and adaptively switch to the optimal one. Additionally, MPS integrates historical cost with random exploration to stabilize the pruning process and avoid local optima. Extensive experiments across various vision-language tasks and models demonstrate that our method effectively improves performance under high pruning ratios compared to state-of-the-art approaches. The source code is available at https://github.com/Wuzimeng/CoMP.git.
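To make the stage-wise mode selection in MPS concrete, the following minimal sketch implements an epsilon-greedy choice over smoothed pruning costs. The cost definition (e.g., loss increase per unit of compute removed), the smoothing factor alpha, and the exploration rate epsilon are illustrative assumptions rather than the paper's exact formulation.

```python
# Hypothetical sketch of MPS-style stage-wise mode selection; the cost
# definition, smoothing factor, and exploration rate are assumptions.
import random

MODES = ["parameter", "token"]

def select_mode(cost_history, epsilon=0.1):
    """Explore a random mode with probability epsilon to avoid local
    optima; otherwise exploit the mode with the lowest smoothed cost."""
    if random.random() < epsilon:
        return random.choice(MODES)
    return min(MODES, key=lambda m: cost_history[m])

def update_cost(cost_history, mode, new_cost, alpha=0.5):
    """Blend the latest measured cost with the historical cost so that a
    single noisy stage does not destabilize the mode choice."""
    cost_history[mode] = alpha * new_cost + (1.0 - alpha) * cost_history[mode]

# Outer loop over pruning stages; measure_cost / tighten_threshold stand in
# for model-specific routines that evaluate and apply one pruning step.
cost_history = {m: 0.0 for m in MODES}
for stage in range(20):
    mode = select_mode(cost_history)
    # new_cost = measure_cost(model, mode)   # e.g., loss delta per FLOPs saved
    # tighten_threshold(model, mode)         # increase this mode's pruning ratio
    # update_cost(cost_history, mode, new_cost)
```

The exploitation step favors whichever mode currently removes redundancy most cheaply, while the occasional random step keeps the schedule from locking into one mode prematurely.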


Figures (11)

  • Figure 1: Illustration of different pruning modes for VLMs, with accuracy on NLVR2. For (i) parameter and (ii) token pruning, distinct modalities are simultaneously pruned under a unified ratio adjustment. For (iii) simple joint pruning, parameter and token pruning are conducted either sequentially or simultaneously without mitigating their inherent inconsistency. For (iv) our proposed CoMP, distinct pruning modes collaborate and only the optimal one is conducted at each stage in the progressive pruning process.
  • Figure 2: (a) At $\mathit{layer}_{10}$ of BLIP's vision encoder, the tokens contributing most to parameter importance (blue marks) and those with the highest token importance overlap only slightly (less than 30%). (b) At $\mathit{layer}_2$ of BLIP's vision encoder, 75% of the least important parameters (red boxes) still strongly influence token importance.
  • Figure 3: Framework overview of CoMP. (a) CoMP performs collaborative parameter and token pruning in nested loops. In the inner loop, input tokens are processed with partially masked parameters. The CIM module mitigates interference of progressive parameter pruning on token importance, and then suppresses the impact of redundant tokens for parameter importance. In the outer loop, the MPS module periodically selects the optimal pruning mode, whose corresponding threshold is adjusted to increase pruning ratio. (b) Given the full VLMs, CoMP compresses them by adaptively pruning parameters in different modalities, while enabling real-time token pruning during inference.
  • Figure 4: Illustration of the CIM module. (a) adopts a token-weighted input norm for parameter importance; (b) applies the parameter pruning mask to the attention weight matrix for token importance (a sketch of both directions follows this list).
  • Figure 5: Illustration of interference between parameter pruning and token importance. (a) Without pruning, $token_1$ is more important than $token_2$. (b) The baseline pruning method masks the redundant $head_1$ via Eq. \ref{eq:mask_parameter}, which flattens the softmax and distorts the ranking of token importance. (c) Masking via Eq. \ref{eq:mask_attention} gradually suppresses $head_1$ without disrupting the correct ranking of token importance.
  • ...and 6 more figures
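To make the two directions of CIM in Figure 4 concrete, the following minimal sketch computes (a) a token-weighted input norm for parameter importance and (b) mask-aware, attention-based token importance. The tensor shapes, the elementwise $|W|$-times-norm form, and the attention-mean scoring are illustrative assumptions rather than the paper's exact definitions.

```python
# Hypothetical sketch of the two directions of CIM (Fig. 4); shapes and
# scoring forms are assumptions for illustration.
import torch

def parameter_importance(weight, inputs, token_scores):
    """Token-weighted input norm for parameter importance (Fig. 4a).

    weight:       (out_features, in_features) linear-layer weight.
    inputs:       (num_tokens, in_features) token activations.
    token_scores: (num_tokens,) per-token importance in [0, 1].
    """
    # Down-weight activations of redundant tokens before taking the norm,
    # so unimportant tokens contribute less to parameter scores.
    weighted = inputs * token_scores.unsqueeze(-1)   # (num_tokens, in_features)
    feat_norm = weighted.norm(p=2, dim=0)            # (in_features,)
    return weight.abs() * feat_norm                  # elementwise scores

def token_importance(attn_weights, head_mask):
    """Mask-aware attention-based token importance (Fig. 4b).

    attn_weights: (heads, num_tokens, num_tokens) attention matrix.
    head_mask:    (heads,) soft mask in [0, 1]; 0 marks a pruned head.
    """
    # Apply the parameter pruning mask to the attention weights so that
    # pruned heads no longer distort the ranking of token importance.
    masked = attn_weights * head_mask.view(-1, 1, 1)
    return masked.mean(dim=(0, 1))                   # (num_tokens,)
```

Under this reading, scoring parameters on token-weighted activations and scoring tokens on mask-weighted attention is what keeps the two importance rankings consistent as pruning proceeds.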