Exploring Silent Data Corruption as a Reliability Challenge in LLM Training

Anton Altenbernd, Philipp Wiesner, Odej Kao

Abstract

As Large Language Models (LLMs) scale in size and complexity, the consequences of failures during training become increasingly severe. A major challenge arises from Silent Data Corruption (SDC): hardware-induced faults that bypass system-level detection mechanisms. SDC may behave like benign numerical noise, but can also cause harmful gradient corruption that leads to loss spikes, divergence, or stalled progress. This work provides a controlled study of how intermittent SDC affects LLM pretraining. Using targeted fault injection at the level of GPU matrix-multiply instructions, we characterize the sensitivity of different bit positions, kernel functions, and execution stages. Our analysis shows that locally originating faults can produce impactful corruption, including NaN propagation, short-lived spikes in loss, gradient norm, and attention logits, as well as persistent parameter divergence. Building on the observed corruption signatures, we propose a lightweight detection method that identifies potentially harmful parameter updates. Experiments on LLaMA models with 60M, 350M, and 1.3B parameters demonstrate that recomputing the most recent training step upon detection can effectively mitigate the impact of these events.
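
To make the injection model concrete, the following is a minimal sketch of the kind of fault studied here, assuming PyTorch; the paper's actual injector operates at the level of GPU matrix-multiply instructions, and the `flip_bit` helper below is an illustrative assumption, not the paper's tooling.

```python
import torch

def flip_bit(out: torch.Tensor, index: int, bit: int) -> None:
    """Flip one bit of a single float32 element in-place.

    Viewing the (contiguous) storage as int32 lets us XOR an
    individual bit, mimicking a transient fault in a GEMM result.
    `bit` is restricted to 0..30 (mantissa and exponent bits); the
    sign bit would require a negative int32 literal.
    """
    assert 0 <= bit <= 30
    out.view(-1).view(torch.int32)[index] ^= 1 << bit

# Illustrative use: corrupt one element of a matrix product. Bit 30 is
# a high exponent bit, so the flip can change a value by hundreds of
# orders of magnitude or push it to Inf/NaN, rather than acting as
# benign numerical noise.
a, b = torch.randn(16, 32), torch.randn(32, 16)
c = a @ b
flip_bit(c, index=0, bit=30)
```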

Figures (4)

  • Figure 1: Evaluation loss, parameter difference, gradient norm before clipping, and maximum attention logits for different bit positions and kernels ($\text{FP}_i$ and $\text{BP}_i$, referring to the $i$-th GEMM kernel in the forward or backward pass, respectively). Evaluation loss and parameter difference are the values observed at the end of the training span; gradient norm and maximum attention logits are the maxima observed over the training span.
  • Figure 2: Training loss for fault-injection runs with and without gradient norm clipping, and for the corresponding baseline run (blue). With gradient norm clipping enabled (orange), injected faults create repeated loss spikes but training continues, illustrating that gradient norm clipping mitigates most harmful spikes. Without gradient norm clipping (green), a single corrupted gradient causes the optimizer’s second moment to become infinite, freezing parameter updates and stalling training (a minimal reproduction of this stall follows this figure list).
  • Figure 3: Effect of a single fault injection with a fault length of three on the training loss and the parameter update magnitude $R_t$. The injected fault triggers a sharp spike in $R_t$, after which the training loss exhibits a bump before gradually stabilizing at a slightly elevated level. This example illustrates how gradient corruption propagates through the optimizer and leads to harmful parameter updates.
  • Figure 4: Effect of the detector sensitivity parameter $\alpha$ on recomputation behavior over 2,000 training steps for the 60M-parameter model. Results are shown per bit position and averaged across all bit positions (in black). Left: Recompute precision, indicating how often recomputation corresponds to true corruption; increasing $\alpha$ raises the false alarm rate, leading to unnecessary recomputation. Center: Detection rate, which increases with $\alpha$. Right: Evaluation loss, which rises for smaller $\alpha$ relative to the baseline variation ($3.881 \pm 0.002$ after 2,000 steps). A sketch of such an $\alpha$-thresholded detector follows this list.
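
The stall in Figure 2 can be reproduced in isolation. Below is a minimal sketch, assuming PyTorch's Adam with default hyperparameters: a corrupted gradient that is large but still finite overflows when squared, so the second-moment estimate $v_t = \beta_2 v_{t-1} + (1-\beta_2) g_t^2$ becomes infinite and never recovers, and every subsequent update $m_t / (\sqrt{v_t} + \epsilon)$ collapses to zero.

```python
import torch

# Minimal reproduction of the stall in Figure 2 (assumed setup): a
# large-but-finite corrupted gradient overflows fp32 when squared, so
# Adam's second-moment estimate becomes inf and stays inf forever.
p = torch.nn.Parameter(torch.ones(2))
opt = torch.optim.Adam([p], lr=1e-3)

p.grad = torch.tensor([1e30, 1e30])   # finite, but (1e30)**2 -> inf in fp32
opt.step()
print(opt.state[p]["exp_avg_sq"])     # tensor([inf, inf])

p.grad = torch.tensor([0.1, 0.1])     # healthy gradients afterwards
before = p.detach().clone()
opt.step()
print(torch.equal(p.detach(), before))  # True: updates are frozen at zero
```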
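
Figure 4's detector can be sketched as a simple threshold test on the parameter update magnitude $R_t$. The sketch below is an assumption-laden illustration, not the paper's method: it takes $R_t$ to be the global L2 norm of the update at step $t$ and flags a step when $R_t$ exceeds a running mean of past healthy values scaled by $1/\alpha$, so that a larger $\alpha$ yields a more sensitive detector, matching the trends in Figure 4.

```python
class UpdateSpikeDetector:
    """Hypothetical sketch of an alpha-thresholded update detector.

    R_t is assumed to be the global L2 norm of the parameter update at
    step t; a step is flagged when R_t exceeds the running mean of past
    healthy values divided by alpha, so larger alpha -> lower threshold
    -> more detections (and more false alarms), as in Figure 4.
    """

    def __init__(self, alpha: float, warmup: int = 20):
        self.alpha = alpha
        self.warmup = warmup
        self.history: list[float] = []

    def check(self, r_t: float) -> bool:
        """Return True if step t looks corrupted and should be recomputed."""
        if len(self.history) >= self.warmup:
            mean = sum(self.history) / len(self.history)
            if r_t > mean / self.alpha:
                return True  # spike: roll back and recompute this step
        self.history.append(r_t)  # track only steps deemed healthy
        return False
```

On a flagged step, the mitigation described in the abstract applies: restore the parameter and optimizer state snapshot taken before the step and recompute it, so the corrupted update never enters the model.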