Understand and Accelerate Memory Processing Pipeline for Disaggregated LLM Inference

Zifan He, Rui Ma, Yizhou Sun, Jason Cong

Abstract

Modern large language models (LLMs) increasingly depend on efficient long-context processing and generation mechanisms, including sparse attention, retrieval-augmented generation (RAG), and compressed contextual memory, to support complex reasoning. We show that these optimizations can be unified into a four-step memory processing pipeline: Prepare Memory, Compute Relevancy, Retrieval, and Apply to Inference. Through systematic profiling, we identify a 22%-97% memory processing overhead in LLM inference and strong heterogeneity in its computational characteristics. Motivated by this insight, we argue that \textbf{heterogeneous systems} are well-suited to accelerate memory processing and thus end-to-end inference. We demonstrate this approach on a GPU-FPGA system by offloading sparse, irregular, and memory-bound operations to FPGAs while retaining compute-intensive operations on GPUs. Evaluated on an AMD MI210 GPU and an Alveo U55C FPGA, our system is $1.04\sim2.2\times$ faster and requires $1.11\sim4.7\times$ less energy than the GPU baseline across multiple LLM inference optimizations (similar results hold on an NVIDIA A100). These results establish heterogeneous systems as a practical direction for efficient LLM memory processing and inform future heterogeneous hardware design.
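
To make the offloading split concrete, the sketch below is a minimal illustration (not the paper's implementation): stages that are sparse, irregular, or memory-bound are routed to an FPGA executor, while dense, compute-intensive work stays on the GPU. The operation names, the pick_device helper, and the assignment of each pipeline step to a device are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class Op:
        name: str
        compute_bound: bool   # high arithmetic intensity, dense GEMM-like work
        irregular: bool       # sparse / gather-scatter / data-dependent access

    def pick_device(op: Op) -> str:
        """Route an operation to the device suited to its characteristics."""
        if op.compute_bound and not op.irregular:
            return "gpu"      # dense projections, attention over retrieved entries
        return "fpga"         # scoring, top-k selection, sparse gathers over memory

    # Illustrative memory-processing stages (names are ours, not the paper's API).
    pipeline = [
        Op("prepare_memory",     compute_bound=False, irregular=True),
        Op("compute_relevancy",  compute_bound=False, irregular=True),
        Op("retrieval",          compute_bound=False, irregular=True),
        Op("apply_to_inference", compute_bound=True,  irregular=False),
    ]

    for op in pipeline:
        print(f"{op.name:>20s} -> {pick_device(op)}")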

Paper Structure

This paper contains 27 sections, 26 figures, and 5 tables.

Figures (26)

  • Figure 1: The GPU-FPGA heterogeneous system (1 MI210 + 1 Alveo U55C) can provide $1.2-1.8\times$ speedup and $1.3-4.7\times$ energy cost reduction consistently over a wide range of long-context LLM inference optimizations. "SA-R" stands for SeerAttention-R and "DSA" stands for DeepSeek Attention.
  • Figure 2: Four-Step Memory Processing Pipeline in LLMs: Prepare Memory preprocesses and structures raw memory for efficient access; Compute Relevancy assigns relevance scores to memory entries with respect to the input query; Retrieval extracts the most relevant memory based on these scores; and Apply to Inference integrates the retrieved content and the input into intermediate outputs, which the remaining LLM operations use to produce tokens (a minimal code sketch of this pipeline follows the figure list).
  • Figure 3: Percentage of latency spent on memory processing for sparse attention methods. With 1M tokens, memory processing can take 22%-81% of the decoding time.
  • Figure 4: Percentage of latency on memory processing for RAG using the Wikipedia dump [su2024dragin]. For two-stage RAG, reranking is time-consuming, leading to a high percentage at 500K and only a slow increase as the document count grows.
  • Figure 5: Left: Percentage of latency on memory processing for parameterized memory (Titans/HMT, LaCT). Right: Percentage of latency on memory processing for MemAgent.
  • ...and 21 more figures
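
As referenced in the Figure 2 caption, the following is a toy sketch of the four-step pipeline over a key-value memory. The function names, the dot-product relevancy, the top-k retrieval, and the softmax-weighted application are stand-ins of our own choosing; concrete methods (sparse attention, RAG, compressed contextual memory) instantiate each step differently.

    import numpy as np

    def prepare_memory(raw_entries):
        """Structure raw memory (e.g., cached KV states or document embeddings) for fast access."""
        return np.stack(raw_entries)                  # (num_entries, dim)

    def compute_relevancy(memory, query):
        """Score every memory entry against the current query (here: dot product)."""
        return memory @ query                         # (num_entries,)

    def retrieve(memory, scores, k):
        """Keep only the top-k most relevant entries."""
        top = np.argpartition(scores, -k)[-k:]
        return memory[top]

    def apply_to_inference(retrieved, query):
        """Fold retrieved memory and the query into an intermediate output
        (softmax-weighted average, a stand-in for attention over retrieved entries)."""
        s = retrieved @ query
        w = np.exp(s - s.max())
        w /= w.sum()
        return w @ retrieved

    rng = np.random.default_rng(0)
    entries = [rng.standard_normal(64) for _ in range(1024)]   # toy "long context"
    query = rng.standard_normal(64)

    mem = prepare_memory(entries)                     # Prepare Memory
    scores = compute_relevancy(mem, query)            # Compute Relevancy
    selected = retrieve(mem, scores, k=16)            # Retrieval
    out = apply_to_inference(selected, query)         # Apply to Inference
    print(out.shape)                                  # (64,)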

Theorems & Definitions (1)

  • Definition 3.1