Communication-free Sampling and 4D Hybrid Parallelism for Scalable Mini-batch GNN Training

Cunyang Wei, Siddharth Singh, Aishwarya Sarkar, Daniel Nichols, Tisha Patel, Aditya K. Ranjan, Sayan Ghosh, Ali Jannesari, Nathan R. Tallent, Abhinav Bhatele

Abstract

Graph neural networks (GNNs) are widely used for learning on graph datasets derived from various real-world scenarios. Learning from extremely large graphs requires distributed training, and mini-batching with sampling is a popular approach for parallelizing GNN training. Existing distributed mini-batch approaches suffer from significant performance bottlenecks due to expensive sampling methods and the limited scaling of data parallelism. In this work, we present ScaleGNN, a 4D parallel framework for scalable mini-batch GNN training that combines communication-free distributed sampling, 3D parallel matrix multiplication (PMM), and data parallelism. ScaleGNN introduces a uniform vertex sampling algorithm that enables each process (GPU device) to construct its local mini-batch, i.e., subgraph partition, without any inter-process communication. 3D PMM enables scaling mini-batch training to much larger GPU counts than vanilla data parallelism, with significantly lower communication overheads. We also present additional optimizations that overlap sampling with training, reduce communication overhead by sending data in lower precision, fuse kernels, and overlap communication with computation. We evaluate ScaleGNN on five graph datasets and demonstrate strong scaling up to 2048 GPUs on Perlmutter, 2048 GCDs on Frontier, and 1024 GPUs on Tuolumne. On Perlmutter, ScaleGNN achieves a 3.5x end-to-end training speedup over the state-of-the-art baseline on ogbn-products.
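
The communication-free sampling described above hinges on every rank being able to draw the same global vertex sample independently. Below is a minimal Python sketch of that idea, assuming each rank holds a contiguous row shard of the adjacency matrix and all ranks share a per-batch seed; the function and argument names are illustrative, not ScaleGNN's actual API.

```python
import numpy as np
import scipy.sparse as sp

def sample_local_shard(local_adj: sp.csr_matrix, row_offset: int,
                       num_vertices: int, batch_size: int, seed: int):
    """Sketch of communication-free uniform vertex sampling (illustrative).

    Every rank seeds an identical RNG, so all ranks draw the same global
    vertex sample without exchanging any messages. Each rank then keeps
    only the sampled rows that fall in its shard and the sampled columns,
    i.e., its piece of the induced-subgraph adjacency matrix.
    """
    rng = np.random.default_rng(seed)  # identical random stream on every rank
    sampled = np.sort(rng.choice(num_vertices, size=batch_size, replace=False))

    # Sampled vertices whose rows live on this rank's shard.
    in_shard = (sampled >= row_offset) & (sampled < row_offset + local_adj.shape[0])
    local_rows = sampled[in_shard] - row_offset

    # Induced subgraph shard: keep only edges between sampled vertices.
    local_subgraph = local_adj[local_rows, :][:, sampled]
    return sampled, local_subgraph
```

Because the only shared state is the seed, each rank can presumably draw the next mini-batch's sample while the current one is training, which is what makes overlapping sampling with training inexpensive.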

Paper Structure

This paper contains 45 sections, 20 equations, 8 figures, 2 tables, and 2 algorithms.

Figures (8)

  • Figure 1: Three families of sampling algorithms. (a) Node-wise sampling. (b) Layer-wise sampling. (c) Subgraph-based sampling.
  • Figure 2: Model architecture in ScaleGNN. Vertex features and the graph adjacency matrix enter an input projection (GEMM) that maps features to a uniform hidden dimension. The projected features then pass through GNN layers, each comprising a GCN convolution (SpMM aggregation followed by GEMM update), RMSNorm, ReLU, dropout, and a residual connection. An output head (GEMM) produces the final class logits. A code sketch of this architecture appears after the figure list.
  • Figure 3: ScaleGNN uniform vertex sampling. (Left) Uniform vertex sampling on the original graph. Selected vertices are shown in green. (Upper right) The full adjacency matrix with sampled rows and columns highlighted, and the induced subgraph adjacency retaining only edges between selected vertices. (Lower right) Distributed sampling: the adjacency matrix is partitioned across GPUs, each independently sampling its local shard.
  • Figure 4: 3D PMM forward pass in ScaleGNN with eight GPUs arranged in a $2{\times}2{\times}2$ grid ($X{\times}Y{\times}Z$). Left: the input projection multiplies the input feature shards (IN, on the $ZX$-plane) by weight shards ($W$, on the $XY$-plane) and an all-reduce along $Z$ produces the projected features ($F$, on the $XY$-plane). Center: SpMM aggregation multiplies adjacency shards ($A$, on the $ZX$-plane) by $F$, followed by an all-reduce along $X$ to obtain the aggregated features ($H$). Right: the GEMM update multiplies $H$ by weight shards ($W$, on the $YX$-plane), and an all-reduce along $Y$ yields the layer output. A code sketch of this forward pass appears after the figure list.
  • Figure 5: Breakdown of epoch times on ogbn-products with a $2{\times}2{\times}2$ grid per data-parallel group (DP1: 8 GPUs; DP4: 32 GPUs) as each optimization is applied cumulatively.
  • ...and 3 more figures
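
To make the Figure 2 description concrete, here is a hedged PyTorch sketch of the per-layer structure and the surrounding projection and output head. The module choices (e.g., nn.RMSNorm, the dropout rate) and all names are assumptions rather than ScaleGNN's code, and nn.RMSNorm requires a recent PyTorch release.

```python
import torch
import torch.nn as nn

class GNNLayer(nn.Module):
    # One GNN layer per Figure 2: GCN convolution (SpMM aggregation followed
    # by a GEMM update), then RMSNorm, ReLU, dropout, and a residual connection.
    def __init__(self, hidden: int, dropout: float = 0.5):
        super().__init__()
        self.update = nn.Linear(hidden, hidden, bias=False)  # GEMM update
        self.norm = nn.RMSNorm(hidden)
        self.drop = nn.Dropout(dropout)

    def forward(self, adj: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        agg = torch.sparse.mm(adj, h)                         # SpMM aggregation
        out = self.drop(torch.relu(self.norm(self.update(agg))))
        return out + h                                        # residual connection

class GNNModel(nn.Module):
    # Input projection -> stacked GNN layers -> output head producing class logits.
    def __init__(self, in_dim: int, hidden: int, num_classes: int, num_layers: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, hidden)                 # input projection (GEMM)
        self.layers = nn.ModuleList(GNNLayer(hidden) for _ in range(num_layers))
        self.head = nn.Linear(hidden, num_classes)            # output head (GEMM)

    def forward(self, adj: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        h = self.proj(x)
        for layer in self.layers:
            h = layer(adj, h)
        return self.head(h)
```

Likewise, the Figure 4 caption reduces to three sharded matrix multiplications, each followed by an all-reduce along one axis of the $X{\times}Y{\times}Z$ GPU grid. The sketch below assumes torch.distributed process groups for each grid axis (grp_x, grp_y, grp_z) and pre-sharded operands; it is a paraphrase of the caption, not the framework's implementation.

```python
import torch
import torch.distributed as dist

def pmm3d_layer_forward(in_shard, w_in_shard, adj_shard, w_up_shard,
                        grp_x, grp_y, grp_z):
    # Input projection: IN (ZX plane) @ W (XY plane); all-reduce along Z gives F.
    f = in_shard @ w_in_shard
    dist.all_reduce(f, group=grp_z)

    # SpMM aggregation: A (ZX plane) @ F; all-reduce along X gives H.
    h = torch.sparse.mm(adj_shard, f)
    dist.all_reduce(h, group=grp_x)

    # GEMM update: H @ W (YX plane); all-reduce along Y gives the layer output.
    out = h @ w_up_shard
    dist.all_reduce(out, group=grp_y)
    return out
```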