
FlatAttention: Dataflow and Fabric Collectives Co-Optimization for Large Attention-Based Model Inference on Tile-Based Accelerators

Chi Zhang, Luca Colagrande, Renzo Andri, Luca Benini

Abstract

Attention accounts for an increasingly dominant fraction of total inference computation in mixture-of-experts (MoE) models, making its efficient acceleration critical. Emerging domain-specific accelerators for large-model inference are shifting toward chip-scale and wafer-scale tile-based architectures. Tiles contain large matrix and vector engines and are connected through an on-chip interconnect that supports tile-to-tile traffic, relieving the tile-to-main-memory bandwidth bottleneck. Dataflow management is therefore crucial for achieving high utilization. We propose FlatAttention, a dataflow for modern attention variants on tile-based accelerators. FlatAttention minimizes expensive high-bandwidth memory (HBM) accesses by exploiting collective primitives integrated into the on-chip network fabric, achieving up to 92.3% utilization, a 4.1x speedup over FlashAttention-3, and 16x lower HBM traffic. On a 32x32 tile configuration with peak performance comparable to the NVIDIA GH200, FlatAttention generalizes across multiple attention variants, achieving 86% average utilization for compute-bound variants and 78% average HBM bandwidth utilization for memory-bound ones, resulting in an average 1.9x speedup over attention implementations on the GH200. Finally, we evaluate end-to-end DeepSeek-v3 FP8 decoding with FlatAttention on a wafer-scale multi-die system, achieving a 1.9x improvement in system throughput and a 1.4x reduction in per-user token output latency, despite 1.5x lower peak system performance than the state-of-the-art solution.
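
To make the dataflow concrete, the sketch below shows blocked attention with online softmax in NumPy, the tiling scheme popularized by FlashAttention that FlatAttention reorganizes across a tile grid. The function name, block size, and single-head layout are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def blocked_attention(Q, K, V, block=128):
    """Illustrative sketch: softmax(Q K^T / sqrt(d)) @ V, one KV block at
    a time. Running max/denominator statistics keep each block update
    numerically stable without materializing the full attention matrix."""
    n, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    O = np.zeros_like(Q, dtype=np.float64)   # output accumulator
    m = np.full(n, -np.inf)                  # running row-wise max of logits
    l = np.zeros(n)                          # running softmax denominator
    for s in range(0, K.shape[0], block):
        Kb, Vb = K[s:s + block], V[s:s + block]
        S = (Q @ Kb.T) * scale               # logits for this KV block
        m_new = np.maximum(m, S.max(axis=1))
        P = np.exp(S - m_new[:, None])       # block-local exponentials
        alpha = np.exp(m - m_new)            # rescale previously accumulated stats
        l = l * alpha + P.sum(axis=1)
        O = O * alpha[:, None] + P @ Vb
        m = m_new
    return O / l[:, None]
```

On a tile-based accelerator, the KV blocks of this loop are what get distributed across tiles, and the running (m, l, O) statistics are what on-fabric collectives would merge, so HBM is touched only for the initial Q/K/V reads and the final output write.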

Paper Structure

This paper contains 19 sections, 11 equations, 15 figures, and 3 tables.

Figures (15)

  • Figure 1: (a) FLOP breakdown for LLM models during the prefill (sequence length) and decode (KV length) stages. (b) Roofline plot of FlashAttention-3 prefill and FlashMLA decode performance on the NVIDIA GH200 GPU, evaluated in FP16, varying head dimension and sequence length for prefill, and speculative length and KV-cache length for decode benchmarks.
  • Figure 2: (a) Tile-based many-PE architecture template. (b) Row-wise multicast implementation with fabric-supported hardware collectives (HW), compared against two software-based collective implementations (SW.Tree and SW.Seq). (c) A wafer-scale multi-die system consisting of multiple tile-based many-PE accelerators with a 2D-mesh interconnect topology.
  • Figure 3: (a) Model architecture overview, with schematics for (b) prefill, (c) auto-regressive decoding, as well as (d) auto-regressive decoding.
  • Figure 4: (a) Parametric definition of FlatAttention. (b) Detailed FlatAttention dataflow. (c) Naive FlatAttention schedule. (d) Optimized asynchronous FlatAttention schedule.
  • Figure 5: (a) General SUMMA dataflow for matrix multiplication on tile-based accelerators (see the sketch after this list). DeepSeek-v3 workload distribution under (b) pipeline parallelism, (c) full expert parallelism, and (d) EP-PP hybrid parallelism. (e) Wafer-scale multi-chip system execution mode.
  • ...and 10 more figures
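
For readers unfamiliar with the SUMMA dataflow referenced in Figure 5(a), the sketch below simulates it in NumPy on a logical p x p tile grid: at step k, the grid column holding the k-th block column of A broadcasts it along the rows, the grid row holding the k-th block row of B broadcasts it down the columns, and every tile accumulates a local product. The serial loops and array slices stand in for the row/column broadcasts that the on-chip fabric collectives would perform in hardware; the grid size and function name are assumptions for illustration.

```python
import numpy as np

def summa(A, B, p=4):
    """Block outer-product matrix multiply on a logical p x p tile grid
    (SUMMA). Slicing stands in for the row/column broadcasts that a real
    fabric would implement with multicast collectives."""
    n = A.shape[0]
    assert n % p == 0, "matrices assumed square, divisible by the grid size"
    t = n // p                                     # tile (block) size
    C = np.zeros((n, n))
    for k in range(p):                             # one outer-product stage per step
        for i in range(p):
            Aik = A[i*t:(i+1)*t, k*t:(k+1)*t]      # 'row-broadcast' of A's k-th block column
            for j in range(p):
                Bkj = B[k*t:(k+1)*t, j*t:(j+1)*t]  # 'column-broadcast' of B's k-th block row
                C[i*t:(i+1)*t, j*t:(j+1)*t] += Aik @ Bkj
    return C

# quick check against a dense reference
A, B = np.random.randn(16, 16), np.random.randn(16, 16)
assert np.allclose(summa(A, B), A @ B)
```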