Fast Cross-Operator Optimization of Attention Dataflow

Haodong Chang, Hailiang Hu, Zhenrui Wang, Yu Gong, Rongjian Liang, Zhexiang Tang, Bo Yuan, Jiang Hu

Abstract

Attention is a fundamental computational kernel that accounts for the majority of the workload in transformer and LLM computing. Optimizing dataflow is crucial for enhancing both performance and energy efficiency in attention computation. This optimization involves a range of decisions, such as tiling, computation ordering, and buffer management, and can be applied at both intra-operator and inter-operator levels, resulting in a highly complex decision space. We propose a new approach to cross-operator dataflow optimization. Its centerpiece is an analytical performance model that spans a large decision space and enables matrix-based encoding of multiple candidate solutions. Built on this foundation, a vast number of solutions can be evaluated rapidly, and with the aid of an effective pruning technique, the optimal solution can be identified through exhaustive enumeration. We refer to our method as MMEE (Matrix Multiplication Encoded Enumeration). The ability to efficiently enumerate a large design space allows MMEE to deliver higher-quality solutions substantially faster than prior approaches. MMEE is evaluated across various test cases and accelerator configurations. For energy-driven optimization, it reduces energy consumption by 48%-50% and latency by 31%-69% compared to state-of-the-art methods. For latency-driven optimization, it achieves simultaneous reductions of 40%-50% in energy consumption and 40%-69% in latency. Additionally, MMEE is $64\times$ to $343\times$ faster than prior work.
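To make the matrix-encoding idea concrete, the sketch below shows one way a batch of candidate dataflows could be scored in a single matrix-vector product: each candidate (here, just a tile size) is encoded as a row of features, an assumed linear cost model supplies the coefficient vector, and infeasible candidates are pruned before evaluation. All features, coefficients, and capacities here are illustrative assumptions, not the paper's actual performance model.

```python
# Hypothetical sketch of matrix-encoded enumeration (not the paper's model).
# Each candidate dataflow is one row of a feature matrix S; a linear cost
# model w turns scoring the whole population into a single product S @ w.
import numpy as np

def encode_candidates(tile_sizes, seq_len, head_dim):
    """Encode each candidate tile size as an assumed feature row:
    [DRAM accesses, on-chip buffer footprint, PE under-utilization]."""
    rows = []
    for t in tile_sizes:
        n_tiles = -(-seq_len // t)               # ceiling division
        dram_accesses = n_tiles * t * head_dim   # assumed traffic proxy
        buffer_bytes = t * head_dim              # one tile kept on-chip
        underutil = max(0, 128 - t)              # assumed 128-wide PE array
        rows.append([dram_accesses, buffer_bytes, underutil])
    return np.asarray(rows, dtype=np.float64)

# Assumed per-feature cost coefficients (e.g., energy per DRAM access).
w = np.array([2.0, 0.1, 50.0])

tile_sizes = [16, 32, 64, 128, 256]
S = encode_candidates(tile_sizes, seq_len=4096, head_dim=64)

# Prune candidates whose buffer footprint exceeds an assumed 64 KB capacity.
feasible = S[:, 1] <= 64 * 1024
costs = S[feasible] @ w                      # score all survivors at once
best = np.flatnonzero(feasible)[np.argmin(costs)]
print(f"best tile size: {tile_sizes[best]}")
```

Stacking candidates as matrix rows is what lets a single BLAS-level multiplication replace thousands of per-candidate model evaluations, which is the property that makes exhaustive enumeration of a large design space tractable.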

Figures (27)

  • Figure 1: Comparison of various cross-operator dataflow mappers.
  • Figure 2: Attention computation and accelerator architecture.
  • Figure 3: Fusion dataflow keeps intermediate results ($C$) on-chip, avoiding off-chip DRAM accesses.
  • Figure 4: Tiling and tiled fusion. (a) Each operator is 2×2 tiled. (b) One tile of the intermediate result ($c_1$) is kept on-chip instead of the full $C$, reducing buffer use from 4 tiles to 1. A half-filled square denotes partial sums. (See the sketch following this list.)
  • Figure 5: The impact of tiling. Each compute stage represents the multiplication of a pair of tiles. Curves of (a) buffer utilization and (b) DRAM accesses show how tiling affects on-chip buffer usage and off-chip traffic. (c) When the tile size is smaller than the PE array, the PE array is under-utilized.
  • ...and 22 more figures
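The tiled fusion illustrated in Figures 3 and 4 can be made concrete with a short numpy sketch: for the chained product $D = (AB)E$ (the softmax between the two attention matmuls is omitted for simplicity), each tile of the intermediate $C = AB$ is produced and consumed immediately, so only one tile of $C$ is ever resident on-chip instead of all four. The shapes, the 2×2 tiling, and all variable names are illustrative assumptions.

```python
# Minimal numpy sketch of tiled fusion: the intermediate C = A @ B is never
# materialized in full; each C tile is produced and immediately consumed,
# so a single tile-sized buffer suffices.
import numpy as np

T = 2                       # tile size (2x2 tiling of 4x4 matrices)
A = np.random.rand(4, 4)
B = np.random.rand(4, 4)
E = np.random.rand(4, 4)

D = np.zeros((4, 4))
for i in range(0, 4, T):
    for j in range(0, 4, T):
        # Produce one tile of the intermediate C entirely "on-chip".
        c_tile = np.zeros((T, T))
        for k in range(0, 4, T):
            c_tile += A[i:i+T, k:k+T] @ B[k:k+T, j:j+T]
        # Consume it immediately: accumulate its contribution to D, after
        # which the buffer slot can be reused for the next C tile.
        for l in range(0, 4, T):
            D[i:i+T, l:l+T] += c_tile @ E[j:j+T, l:l+T]

assert np.allclose(D, (A @ B) @ E)   # matches the unfused computation
```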

Theorems & Definitions (1)

  • proof