Training-Free Exponential Context Extension via Cascading KV Cache

Jeffrey Willette, Heejun Lee, Youngwan Lee, Myeongjae Jeon, Sung Ju Hwang

TL;DR

The paper tackles the quadratic attention bottleneck hindering long-context LLM deployment by introducing a training-free Cascading KV Cache that partitions the fixed KV cache into cascading sub-caches with varying acceptance rates and EMA-driven token selection. Coupled with a strided prefill strategy and an efficient circular-buffer implementation, the method achieves near-linear inference while extending the effective context far beyond the cache size. Empirical results across PG19 perplexity, BookSum, passkey retrieval, and LongBench show meaningful improvements in accuracy and latency, including strong gains after multiple doublings of the context length and substantial latency reductions against Flash Attention 2. The approach offers a practical path to real-time long-context generation in resource-constrained environments without retraining, enabling scalable, streaming LLM applications.

Abstract

The transformer's context window is vital for tasks such as few-shot learning and conditional generation as it preserves previous tokens for active memory. However, as the context lengths increase, the computational costs grow quadratically, hindering the deployment of large language models (LLMs) in real-world, long sequence scenarios. Although some recent key-value caching (KV Cache) methods offer linear inference complexity, they naively manage the stored context, prematurely evicting tokens and losing valuable information. Moreover, they lack an optimized prefill/prompt stage strategy, resulting in higher latency than even quadratic attention for realistic context sizes. In response, we introduce a novel mechanism that leverages cascading sub-cache buffers to selectively retain the most relevant tokens, enabling the model to maintain longer context histories without increasing the cache size. Our approach outperforms linear caching baselines across key benchmarks, including streaming perplexity, question answering, book summarization, and passkey retrieval, where it retains better retrieval accuracy at 1M tokens after four doublings of the cache size of 65K. Additionally, our method reduces prefill stage latency by a factor of 6.8 when compared to flash attention on 1M tokens. These innovations not only enhance the computational efficiency of LLMs but also pave the way for their effective deployment in resource-constrained environments, enabling large-scale, real-time applications with significantly reduced latency.
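
To make the eviction mechanism concrete, the following is a minimal Python sketch of the cascading idea, not the paper's implementation: the class name, the acceptance rule (a token propagates to the next sub-cache only if its EMA attention score is at least that sub-cache's average), and the list-based storage are illustrative assumptions, whereas the actual method operates on circular buffers over per-head tensors.

```python
from collections import deque


class CascadingKVCacheSketch:
    """Toy sketch of a cascading KV cache; names and the acceptance rule
    are illustrative assumptions, not the paper's exact algorithm."""

    def __init__(self, total_size: int, num_cascades: int = 4, ema_beta: float = 0.9):
        assert total_size % num_cascades == 0
        self.sub_size = total_size // num_cascades
        self.beta = ema_beta
        # each entry: {"key": k, "value": v, "score": EMA of past attention weights}
        self.sub_caches = [deque() for _ in range(num_cascades)]

    def update_scores(self, attn_weights):
        """attn_weights: one attention weight per cached token, oldest first."""
        cached = [e for cache in self.sub_caches for e in cache]
        for entry, a in zip(cached, attn_weights):
            entry["score"] = self.beta * entry["score"] + (1.0 - self.beta) * a

    def append(self, key, value):
        """Insert a new token into the first (most recent) sub-cache."""
        self._insert(0, {"key": key, "value": value, "score": 0.0})

    def _insert(self, level, entry):
        cache = self.sub_caches[level]
        cache.append(entry)
        if len(cache) <= self.sub_size:
            return
        evicted = cache.popleft()                     # oldest token overflows
        if level + 1 == len(self.sub_caches):
            return                                    # fell off the last sub-cache
        nxt = self.sub_caches[level + 1]
        mean_score = sum(e["score"] for e in nxt) / len(nxt) if nxt else 0.0
        # conditional acceptance: only tokens whose EMA attention score reaches
        # the next sub-cache's average propagate deeper, so a fraction of
        # evictions survive while stale tokens are dropped early
        if evicted["score"] >= mean_score:
            self._insert(level + 1, evicted)
```

A plain sliding window of the same total size would drop every token after a fixed number of steps regardless of how often it was attended to; in the cascading scheme, frequently attended tokens can survive across several sub-caches while superfluous ones are evicted early.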

Paper Structure

This paper contains 19 sections, 4 equations, 19 figures, 10 tables, and 3 algorithms.

Figures (19)

  • Figure 1: Attention matrices from Streaming LLM sink and Cascading KV Cache (Ours), both with the same total cache size.
  • Figure 2: Passkey accuracy up to 1M tokens given a cache size of 65K. Our Cascading cache maintains higher accuracy even after four doublings of the context length.
  • Figure 3: Comparison of Streaming LLM sink and Cascading Cache (Ours). Top: Streaming LLM stores fixed sink tokens (red) along with a sliding window of $N$ recent tokens. Bottom: Our method segments the cache into smaller cascading sub-caches, where each successive sub-cache conditionally accepts a fraction of tokens based on the magnitude of past attention scores. This simple technique allows for important tokens to remain in the cache for a longer time instead of being naively evicted too early. Conversely, superfluous tokens may be evicted before reaching the end of the cache, allowing for an intelligent eviction strategy.
  • Figure 4: Top: Each successive sub-cache window accepts a fraction of tokens evicted from the previous sub-cache. Bottom: At the boundaries between sub-caches, there are four possible cases where our method takes a different conditional action, creating a dynamic attention pattern. Circular buffers are not depicted for simplicity of visualization.
  • Figure 5: Our strided prefill. We first compute attention for a chunk (stride) of new queries and new + cached keys, forming a rectangular slice of the attention matrix at each step (a hedged sketch of this loop follows the figure list).
  • ...and 14 more figures
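
The strided prefill of Figure 5 can be pictured as the loop below. This is a hedged sketch: the function name, the `cache` interface (.keys()/.values()/.append()), and the masking details are assumptions rather than the paper's code. Each iteration processes one stride of new queries against the cached keys plus the chunk's own keys, producing a rectangular slice of the attention matrix per step.

```python
import torch
import torch.nn.functional as F


def strided_prefill(queries, keys, values, cache, stride=4096):
    """Sketch of a strided prefill loop; the `cache` object is a hypothetical
    stand-in for a fixed-budget KV cache (e.g. a cascading cache).

    queries/keys/values: [batch, heads, prompt_len, head_dim]
    """
    outputs = []
    prompt_len = queries.shape[2]
    for start in range(0, prompt_len, stride):
        end = min(start + stride, prompt_len)
        chunk = end - start
        q = queries[:, :, start:end]
        k_new, v_new = keys[:, :, start:end], values[:, :, start:end]
        k = torch.cat([cache.keys(), k_new], dim=2)
        v = torch.cat([cache.values(), v_new], dim=2)
        # cached tokens are strictly in the past, so they are fully visible;
        # causality only has to be enforced within the new chunk itself
        n_cached = k.shape[2] - chunk
        visible_cache = torch.ones(chunk, n_cached, dtype=torch.bool, device=q.device)
        causal_chunk = torch.ones(chunk, chunk, device=q.device).tril().bool()
        mask = torch.cat([visible_cache, causal_chunk], dim=1)
        outputs.append(F.scaled_dot_product_attention(q, k, v, attn_mask=mask))
        # the cache decides internally which of the new tokens to keep or evict
        cache.append(k_new, v_new)
    return torch.cat(outputs, dim=2)
```

Because the cache budget is fixed, the key length per step stays roughly constant as the prompt grows, which is what turns the quadratic full-prompt attention of a naive prefill into the near-linear prefill latency reported against Flash Attention 2.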