ShadowLLM: Predictor-based Contextual Sparsity for Large Language Models

Yash Akhauri, Ahmed F AbouElhamayed, Jordan Dotzel, Zhiru Zhang, Alexander M Rush, Safeen Huda, Mohamed S Abdelfattah

TL;DR

ShadowLLM is a novel predictor that shadows the LLM's behavior to enforce better sparsity patterns, improving end-to-end accuracy by over 15% compared to prior methods.

Abstract

The high power consumption and latency-sensitive deployments of large language models (LLMs) have motivated efficiency techniques like quantization and sparsity. Contextual sparsity, where the sparsity pattern is input-dependent, is crucial in LLMs because the permanent removal of attention heads or neurons from LLMs can significantly degrade accuracy. Prior work has attempted to model contextual sparsity using neural networks trained to predict activation magnitudes, which can be used to dynamically prune structures with low predicted activation magnitude. In this paper, we look beyond magnitude-based pruning criteria to assess attention head and neuron importance in LLMs. We develop a novel predictor called ShadowLLM, which can shadow the LLM behavior and enforce better sparsity patterns, resulting in over 15% improvement in end-to-end accuracy compared to prior methods. In addition, ShadowLLM achieves up to a 20% speed-up over the state-of-the-art DejaVu framework. These enhancements are validated on Llama-2 and OPT models with up to 30 billion parameters. Our code is available at https://github.com/abdelfattah-lab/shadow_llm/.
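To make the mechanism in the abstract concrete, below is a minimal PyTorch sketch of a unified contextual-sparsity predictor: a single small MLP scores every attention head in the model from one early hidden state, and the lowest-scoring heads are masked for the current input. All names, layer sizes, and the top-k masking rule are illustrative assumptions, not ShadowLLM's actual architecture.

```python
import torch
import torch.nn as nn

class UnifiedSparsityPredictor(nn.Module):
    """Hypothetical unified predictor in the spirit of ShadowLLM: one small
    MLP at the start of the transformer scores every attention head in the
    model from a single early hidden state (sketch, not the paper's code)."""

    def __init__(self, hidden_dim: int, num_layers: int, heads_per_layer: int):
        super().__init__()
        self.num_layers = num_layers
        self.heads_per_layer = heads_per_layer
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim // 4),
            nn.ReLU(),
            nn.Linear(hidden_dim // 4, num_layers * heads_per_layer),
        )

    def forward(self, hidden_state: torch.Tensor) -> torch.Tensor:
        # hidden_state: (batch, hidden_dim) -> scores: (batch, layers, heads)
        return self.mlp(hidden_state).view(-1, self.num_layers, self.heads_per_layer)


def head_mask(scores: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    """Keep the top keep_ratio fraction of heads per layer for this input;
    masked heads (0.0) would be skipped during the forward pass."""
    k = max(1, int(scores.shape[-1] * keep_ratio))
    idx = scores.topk(k, dim=-1).indices
    mask = torch.zeros_like(scores)
    return mask.scatter_(-1, idx, 1.0)


# Example: score and mask heads for one input embedding (all sizes made up).
predictor = UnifiedSparsityPredictor(hidden_dim=2048, num_layers=24, heads_per_layer=32)
mask = head_mask(predictor(torch.randn(1, 2048)), keep_ratio=0.5)
```

Because the predictor runs once per input rather than once per layer, this design avoids the per-layer predictor overhead the paper attributes to DejaVu.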

Paper Structure

This paper contains 16 sections, 3 equations, 21 figures, and 4 tables.

Figures (21)

  • Figure 1: ShadowLLM uses more accurate pruning criteria and a simpler sparsity predictor than DejaVu. Its pruning criteria yield a stronger accuracy-sparsity trade-off (geometric mean) across seven downstream evaluation tasks, and its unified predictor improves execution latency compared to DejaVu's layerwise predictor.
  • Figure 2: Contextual sparsity prunes neurons and attention heads based on the context (input) itself. Training a predictor to dynamically predict the sparsity pattern dependent on the input tokens can improve model quality.
  • Figure 3: Heads with higher rank variance, calculated using GradNorm, are more context-dependent (see the sketch after this list). This context dependence, or contextual sparsity, is most pronounced in the early and late layers of the OPT-1.3B model. We measured each head's rank variance across 5000 inputs drawn from seven five-shot evaluation tasks.
  • Figure 4: (1) A single predictor that models the entire LLM improves performance, and (2) using gradient-based information in the pruning criteria for neurons improves model quality.
  • Figure 5: Head-importance ranking ability of different sparsity predictors on 500 queries across seven downstream tasks. A single predictor at the start of the transformer can accurately model global relative head and neuron importance.
  • ...and 16 more figures
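The GradNorm criterion and the rank-variance measurement described in Figure 3 can be sketched as follows, assuming head importance is taken as the L2 norm of the loss gradient with respect to each head's output activation; the helper names and the way head outputs are collected are hypothetical, not the paper's exact formulation.

```python
import numpy as np
import torch

def gradnorm_scores(loss: torch.Tensor, head_outputs: list) -> list:
    """GradNorm-style criterion (sketch): score each attention head by the
    L2 norm of the loss gradient w.r.t. its output activation. The
    `head_outputs` tensors are assumed to be collected during the forward
    pass (e.g., via forward hooks) and kept in the autograd graph."""
    grads = torch.autograd.grad(loss, head_outputs, retain_graph=True)
    return [g.norm().item() for g in grads]


def rank_variance(scores: np.ndarray) -> np.ndarray:
    """Variance of each head's importance rank across inputs.

    scores: (num_inputs, num_heads) array of per-input importance values.
    A head with high rank variance is strongly context-dependent: its
    usefulness changes from input to input (cf. Figure 3)."""
    ranks = scores.argsort(axis=1).argsort(axis=1)  # rank of each head per input
    return ranks.var(axis=0)
```

Under this reading, heads whose rank barely moves across inputs can be pruned statically, while high-variance heads are the ones that benefit from an input-dependent predictor.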