
Batch Loss Score for Dynamic Data Pruning

Qing Zhou, Bingxuan Zhao, Tao Yang, Hongyuan Zhang, Junyu Gao, Qi Wang

Abstract

Dynamic data pruning accelerates deep learning by selectively omitting less informative samples during training. While per-sample loss is a common importance metric, obtaining it can be challenging or infeasible for complex models or loss functions, often requiring significant implementation effort. This work proposes the Batch Loss Score (BLS), a computationally efficient alternative that uses an Exponential Moving Average (EMA) of readily available batch losses to assign scores to individual samples. We frame the batch loss, from the perspective of a single sample, as a noisy measurement of its scaled individual loss, with noise originating from stochastic batch composition. It is formally shown that the EMA mechanism functions as a first-order low-pass filter, attenuating high-frequency batch-composition noise. This yields a score approximating the smoothed, persistent contribution of the individual sample to the loss, providing a theoretical grounding for BLS as a proxy for sample importance. BLS is remarkably simple to integrate (a three-line injection) and readily adapts existing per-sample-loss-based methods (a one-line proxy). Its effectiveness is demonstrated by enhancing two such methods to losslessly prune 20%-50% of samples across 14 datasets, 11 tasks, and 18 models, highlighting its utility and broad applicability, especially in complex scenarios where per-sample loss is difficult to access. Code is available at https://github.com/mrazhou/BLS.
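
To make the abstract's mechanism concrete, here is a minimal sketch of how such a batch-loss EMA might be injected into an ordinary PyTorch training loop. This is an illustrative reconstruction, not the authors' reference implementation: the toy model, dataset, decay factor `alpha`, and the final keep ratio are all assumptions.

```python
"""Sketch of the BLS idea: maintain an EMA of the scalar *batch* loss for
every sample in the batch. Illustrative only; hyperparameters are assumed."""
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)
X = torch.randn(512, 10)
y = torch.randint(0, 2, (512,))
dataset = TensorDataset(torch.arange(512), X, y)  # carry sample indices alongside data
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()        # returns a single scalar batch loss

alpha = 0.9                              # EMA decay factor (assumed hyperparameter)
scores = torch.zeros(512)                # one BLS score per training sample

for epoch in range(3):
    for idx, xb, yb in loader:
        loss = criterion(model(xb), yb)  # readily available batch loss

        # The "three-line injection": every sample in the batch gets an EMA
        # update driven by the scalar batch loss it contributed to.
        with torch.no_grad():
            scores[idx] = alpha * scores[idx] + (1 - alpha) * loss.item()

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# The "one-line proxy": rank by `scores` wherever a method expects per-sample
# loss, e.g. keep the top 80% of samples for the next epoch (ratio assumed).
keep_idx = scores.argsort(descending=True)[: int(0.8 * len(scores))]
```

Because the batch loss is a scalar that every training loop already computes, this update needs no access to model internals or per-sample reductions, which is what makes the injection so small.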

Paper Structure

This paper contains 24 sections, 1 theorem, 10 equations, 4 figures, and 9 tables.

Key Result

Proposition 4.4

The BLS score $s_i^{(k)}$, obtained from the indexed EMA update, is the output of applying a first-order IIR low-pass filter $H_{\alpha}$ to the input sequence $\mathcal{L}_i = \mathcal{S}_i + \mathcal{N}_i$, plus a term decaying with the initial condition $s_i^{(0)}$:

$$s_i^{(k)} = (H_{\alpha} * \mathcal{L}_i)[k] + \alpha^{k}\, s_i^{(0)},$$

where $*$ denotes discrete convolution and $H_{\alpha}$ is the filter with impulse response $h[n] = (1-\alpha)\alpha^{n} u[n]$ ($u[n]$ being the unit step function).
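
A short derivation sketch for the filtered form, assuming the standard EMA update $s_i^{(k)} = \alpha\, s_i^{(k-1)} + (1-\alpha)\,\mathcal{L}_i^{(k)}$ (this update rule is inferred from the stated impulse response, not quoted from the paper):

```latex
% Unrolling the assumed EMA update k times:
\begin{align*}
s_i^{(k)} &= (1-\alpha)\,\mathcal{L}_i^{(k)} + \alpha\, s_i^{(k-1)} \\
          &= (1-\alpha)\sum_{n=0}^{k-1} \alpha^{n}\, \mathcal{L}_i^{(k-n)} + \alpha^{k}\, s_i^{(0)} \\
          &= (H_{\alpha} * \mathcal{L}_i)[k] + \alpha^{k}\, s_i^{(0)},
\end{align*}
% since (H_alpha * L_i)[k] = \sum_{n \ge 0} h[n]\, L_i[k-n] with
% h[n] = (1-\alpha)\alpha^n u[n] and L_i[m] = 0 for m <= 0.
% The filter's magnitude response,
%   |H_alpha(e^{j\omega})| = (1-\alpha) / \sqrt{1 - 2\alpha\cos\omega + \alpha^2},
% peaks at \omega = 0 and decreases monotonically on [0, \pi]: a low-pass filter.
% The initial-condition term \alpha^k s_i^{(0)} vanishes as k grows, since 0 < \alpha < 1.
```

So after a transient governed by $\alpha^{k}$, the score tracks the low-frequency (persistent) part of $\mathcal{S}_i$ while attenuating the high-frequency batch-composition noise $\mathcal{N}_i$.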

Figures (4)

  • Figure 1: Score acquisition complexity (excluding scheduling logic). BLS: 3 lines | InfoBatch: 33+ lines plus intrusive modifications.
  • Figure 2: Empirical validation of frequency separation. Top: average PSD for the scaled signal ($\text{Mean } \mathcal{S}_i$) and batch noise ($\text{Mean } \mathcal{N}_i$). Bottom: time series for a representative sample; the $\mathcal{S}_i$ magnitude is much smaller (see inset). A toy numerical check of this separation follows this list.
  • Figure 3: Effect of EMA decay factor $\alpha$. Accuracy and Pruning Ratio are shown for ResNet18 and ResNet50.
  • Figure 4: Workflow Comparison: BLS Black-Box Simplicity vs. Per-Sample Loss Complexity.
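
As referenced in the Figure 2 item, the toy check below illustrates the frequency separation that figure reports: a slowly varying component passes through the EMA nearly unchanged, while broadband "batch composition noise" is strongly attenuated. The signal/noise construction, `alpha`, and sequence length are assumptions for illustration, not the paper's experimental setup.

```python
"""Toy check of the low-pass separation behind Figure 2: an EMA with impulse
response h[n] = (1 - alpha) * alpha**n passes a slow component nearly
unchanged while strongly attenuating broadband noise. Illustrative only."""
import numpy as np

rng = np.random.default_rng(0)
alpha, k = 0.9, 2048

n = np.arange(k)
slow = 0.3 * np.sin(2 * np.pi * n / 512)  # slowly varying "signal" component
noise = rng.standard_normal(k)            # broadband "batch composition noise"

def ema(x: np.ndarray, alpha: float) -> np.ndarray:
    """First-order IIR low-pass: s[k] = alpha * s[k-1] + (1 - alpha) * x[k]."""
    s, out = 0.0, np.empty_like(x)
    for i, v in enumerate(x):
        s = alpha * s + (1 - alpha) * v
        out[i] = s
    return out

for name, x in [("slow signal", slow), ("broadband noise", noise)]:
    ratio = np.std(ema(x, alpha)) / np.std(x)
    print(f"{name}: output/input std ratio = {ratio:.3f}")
# Expected: the slow signal's ratio is close to 1 (~0.99), while the noise
# ratio is far below 1 (~0.23 = sqrt((1-alpha)/(1+alpha))) -- the frequency
# separation that lets the EMA recover a sample's persistent contribution.
```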

Theorems & Definitions (6)

  • Definition 3.1: Dynamic Data Pruning
  • Definition 3.2: Batch Loss Score
  • Definition 4.1: Batch Composition Noise
  • Remark 4.3
  • Proposition 4.4: BLS Score as Low-Pass Filtered Estimate
  • Proof of Proposition 4.4