
WSVD: Weighted Low-Rank Approximation for Fast and Efficient Execution of Low-Precision Vision-Language Models

Haiyu Wang, Yutong Wang, Jack Jiang, Sai Qian Zhang

Abstract

Singular Value Decomposition (SVD) has become an important technique for reducing the computational burden of Vision-Language Models (VLMs), which play a central role in tasks such as image captioning and visual question answering. Although multiple prior works have proposed efficient SVD variants to enable low-rank operations, we find that in practice it remains difficult to achieve substantial latency reduction during model execution. To address this limitation, we introduce a new computational pattern and apply SVD at a finer granularity, enabling real and measurable improvements in execution latency. Furthermore, recognizing that weight elements differ in their relative importance, we adaptively assign an importance to each weight element during the SVD process to better preserve accuracy, then extend this framework with quantization applied to both weights and activations, resulting in a highly efficient VLM. Collectively, we introduce~\textit{Weighted SVD} (WSVD), which outperforms other approaches by achieving over $1.8\times$ decoding speedup while preserving accuracy. We open source our code at: \href{https://github.com/SAI-Lab-NYU/WSVD}{\texttt{https://github.com/SAI-Lab-NYU/WSVD}}
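The idea of weighting elements during SVD can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's method: here "importance" is reduced to a per-column scale `s` (e.g., derived from activation statistics), the scaled matrix is truncated by plain SVD, and the scale is folded back out so that reconstruction error shrinks on the columns deemed important. The function name and dimensions are hypothetical.

```python
import numpy as np

def weighted_lowrank(W, s, rank):
    """Rank-`rank` approximation of W under per-column importance weights s.

    Illustrative sketch (not the paper's WSVD): scale columns by their
    importance, truncate the SVD of the scaled matrix, then undo the
    scaling so the factors approximate W itself, with smaller error on
    high-importance columns.
    """
    Ws = W * s                           # emphasize important columns
    U, S, Vt = np.linalg.svd(Ws, full_matrices=False)
    A = U[:, :rank] * S[:rank]           # absorb singular values into A
    B = Vt[:rank] / s                    # fold the column scaling back out
    return A, B                          # W ≈ A @ B

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
s = rng.uniform(0.5, 2.0, size=64)       # toy importance scores
A, B = weighted_lowrank(W, s, rank=16)
print(A.shape, B.shape)                  # (64, 16) (16, 64)
```

Because truncated SVD is the optimal low-rank approximation of the scaled matrix in Frobenius norm, the factors above minimize the importance-weighted reconstruction error $\|(W - AB)\,\mathrm{diag}(s)\|_F$ over all rank-16 factorizations.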


Paper Structure

This paper contains 31 sections, 10 equations, 5 figures, 15 tables, and 1 algorithm.

Figures (5)

  • Figure 1: (a) Architecture of vision-language model. (b) Overview of WSVD framework.
  • Figure 2: (a) Latency evaluation of VLM including self-attention (SA) and feed-forward (FFN) modules. (b) Conventional SVD: the left side illustrates SVD of $W_k$, and the right side shows the reconstruction of $K_h$ from the shared latent. (c) Per-head SVD: the left side illustrates per-head SVD of $W_{Kh}$, and the right side shows per-head reconstruction of $K_h$ from the per-head latent.
  • Figure 3: (a) Naive reconstruction requires materializing and writing back the full $K_h$ to VRAM (global GPU memory), leading to excessive memory usage and I/O. (b) Our fused kernel consumes $C_{Kh}$ and $C_{Vh}$ tiles on-chip with flash decoding, reducing both the peak memory footprint and I/O traffic. All step numbers are shown in circles.
  • Figure 4: WSVD decoding pipeline. Each token is down-projected to low-rank latents, and $K$ and $V$ latents are appended to the cache, while $Q$ latent is up-projected and consumed together with cached $C_{Kh}, C_{Vh}$ in the fused kernel.
  • Figure 5: Latency evaluation and normalized latency on: (a) RTX 4090 and (b) RTX 5090.
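The per-head scheme described in Figure 2(c) can be sketched as follows. Dimensions, rank, and variable names here are illustrative assumptions, and the actual kernel (Figure 3b) fuses the up-projection into attention rather than materializing $K_h$; this toy materializes it only to show the data flow: each head's key projection $W_{Kh}$ is factored independently, the small latent $C_{Kh}$ is cached, and $K_h$ is reconstructed from it.

```python
import numpy as np

# Toy per-head SVD: factor each head's key projection W_Kh on its own,
# cache the rank-r latent C_Kh = X @ A_h, and reconstruct K_h via the
# per-head up-projection B_h. Sizes are illustrative, not the paper's.
d_model, d_head, n_heads, rank, T = 128, 32, 4, 8, 10
rng = np.random.default_rng(0)
X = rng.standard_normal((T, d_model))        # T token hidden states

heads = []
for _ in range(n_heads):
    W_Kh = rng.standard_normal((d_model, d_head))
    U, S, Vt = np.linalg.svd(W_Kh, full_matrices=False)
    A_h = U[:, :rank] * S[:rank]             # down-projection (d_model x r)
    B_h = Vt[:rank]                          # up-projection   (r x d_head)
    heads.append((A_h, B_h))

# Decoding: only the small latent is cached per head; keys are
# reconstructed lazily from it (here explicitly, in the real kernel fused).
for A_h, B_h in heads:
    C_Kh = X @ A_h                           # cached latent, shape (T, r)
    K_h = C_Kh @ B_h                         # reconstructed keys (T, d_head)
print(C_Kh.shape, K_h.shape)                 # (10, 8) (10, 32)
```

The cache stores $T \times r$ values per head instead of $T \times d_{head}$, which is where the memory and I/O savings in Figure 3(b) come from when $r < d_{head}$.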