LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context Inference

Zhongwei Wan, Ziang Wu, Che Liu, Jinfa Huang, Zhihong Zhu, Peng Jin, Longyue Wang, Li Yuan

TL;DR

Long-context multimodal LLMs suffer from oversized KV caches that slow decoding and increase memory use. The authors propose LOOK-M, a fine-tuning-free, plug-and-play KV-cache compression framework that combines a text-prior eviction strategy with several KV-pair merging schemes to shrink the cache while preserving multimodal context. Across MileBench and multiple backbones, LOOK-M achieves 80-95% cache reduction with 1.3x-1.5x faster decoding and often equal or improved task performance, surpassing KV-cache compression baselines designed for text-only LLMs. The approach is robust across architectures and cache budgets, and it paves the way for efficient long-context multimodal inference on diverse hardware. The authors note that future gains could come from incorporating quantization, distillation, and efficient attention.

Abstract

Long-context Multimodal Large Language Models (MLLMs) demand substantial computational resources for inference, as the growth of their multimodal Key-Value (KV) cache with increasing input length challenges memory and time efficiency. Unlike single-modality LLMs that manage only textual contexts, the KV cache of long-context MLLMs includes representations from multiple images with temporal and spatial relationships, along with the related textual contexts. The predominance of image tokens means traditional optimizations for LLM KV caches are unsuitable for multimodal long-context settings, and no prior work has addressed this challenge. In this work, we introduce LOOK-M, a pioneering, fine-tuning-free approach that efficiently reduces the multimodal KV cache size while maintaining performance comparable to a full cache. We observe that during prompt prefill, the model allocates more attention to textual tokens than to image features; based on this multimodal interaction, we propose a text-prior method to compress the KV cache. Furthermore, to mitigate the degradation of image contextual information, we propose several compensatory strategies based on KV-pair merging. LOOK-M demonstrates that, with a significant reduction in KV cache memory usage, such as 80% in some cases, it not only achieves up to 1.5x faster decoding but also maintains or even improves performance across a variety of long-context multimodal tasks.
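
To make the text-prior eviction idea concrete, here is a minimal PyTorch sketch of how prefill attention could be used to score cached KV pairs, keeping textual entries and only the most-attended image entries within a fixed budget. The function name `text_prior_evict`, its interface, and the score-boosting heuristic are illustrative assumptions, not the authors' implementation.

```python
import torch

def text_prior_evict(keys, values, attn, is_text, budget):
    """Sketch of text-prior KV eviction (hypothetical helper).

    keys, values: [seq, dim] cached KV pairs from prefill.
    attn:         [q_len, seq] prefill attention weights.
    is_text:      [seq] bool mask marking textual positions.
    budget:       number of KV pairs to retain.
    """
    # Importance of each cached position = attention mass it receives
    # from the query tokens during prompt encoding.
    scores = attn.sum(dim=0)                            # [seq]
    # Text-prior: boost textual positions so they are never evicted
    # before image positions.
    scores = scores + is_text.float() * scores.max()
    keep = scores.topk(budget).indices.sort().values    # keep original order
    return keys[keep], values[keep], keep
```

Evicted image KV pairs are not simply discarded; as described above, they can be merged back into the retained entries to preserve image context.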

Paper Structure

This paper contains 21 sections, 12 equations, 6 figures, and 6 tables.

Figures (6)

  • Figure 1: A multimodal long-context sample containing multiple images from MileBench (Song et al., 2024), showing comprehensive spatial relationships.
  • Figure 2: Visualization of attention in multimodal prompt encoding phase, where $\mathbf{X}^{T}$ represents a text sentence and $\mathbf{X}^{I}$ denotes a subsequent image, showcasing the interleaved input of text and images in multimodal long-context scenarios.
  • Figure 3: Pipeline of LOOK-M's KV cache optimization strategy. 'Prefill' denotes prompt encoding.
  • Figure 4: A simple similarity matrix example and the three merging strategies of LOOK-M: Averaged Merging, Pivotal Merging, and Weighted Merging (see the sketch after this list).
  • Figure 5: Influence of Various Cache Budgets on Performance.
  • ...and 1 more figure
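
As a rough illustration of the merging schemes named in Figure 4, the sketch below assigns each evicted KV pair to its most similar retained key (via cosine similarity) and folds it in by averaged, pivotal, or weighted merging. The helper name `merge_evicted`, the similarity-based assignment, and the specific mixing weights (`pivot_w` and the 0.5 factors) are hypothetical choices for illustration rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def merge_evicted(kept_k, kept_v, evict_k, evict_v, mode="weighted", pivot_w=0.7):
    """Fold evicted KV pairs into their nearest retained ("pivot") pairs."""
    # Cosine similarity between evicted and kept keys: [n_evict, n_kept]
    sim = F.normalize(evict_k, dim=-1) @ F.normalize(kept_k, dim=-1).T
    assign = sim.argmax(dim=-1)          # nearest kept slot for each evicted pair
    new_k, new_v = kept_k.clone(), kept_v.clone()
    for j in range(kept_k.size(0)):
        members = (assign == j).nonzero(as_tuple=True)[0]
        if members.numel() == 0:
            continue
        if mode == "average":
            # Averaged Merging: uniform mean of the pivot and its members.
            new_k[j] = torch.cat([kept_k[j:j+1], evict_k[members]]).mean(0)
            new_v[j] = torch.cat([kept_v[j:j+1], evict_v[members]]).mean(0)
        elif mode == "pivotal":
            # Pivotal Merging: keep extra weight on the retained (pivot) pair.
            new_k[j] = pivot_w * kept_k[j] + (1 - pivot_w) * evict_k[members].mean(0)
            new_v[j] = pivot_w * kept_v[j] + (1 - pivot_w) * evict_v[members].mean(0)
        else:
            # Weighted Merging: similarity-weighted average of the members.
            w = torch.softmax(sim[members, j], dim=0)
            new_k[j] = 0.5 * kept_k[j] + 0.5 * (w[:, None] * evict_k[members]).sum(0)
            new_v[j] = 0.5 * kept_v[j] + 0.5 * (w[:, None] * evict_v[members]).sum(0)
    return new_k, new_v
```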