
Regularizing Attention Scores with Bootstrapping

Neo Christopher Chung, Maxim Laletin

Abstract

Vision transformers (ViT) rely on the attention mechanism to weigh input features, and therefore attention scores have naturally been considered as explanations for their decision-making process. However, attention scores are almost always non-zero, resulting in noisy and diffuse attention maps and limiting interpretability. Can we quantify the uncertainty of attention scores and obtain regularized attention scores? To this end, we consider attention scores of ViT in a statistical framework in which independent noise would lead to insignificant yet non-zero scores. Leveraging statistical learning techniques, we introduce bootstrapping for attention scores, which generates a baseline distribution of attention scores by resampling input features. Such a bootstrap distribution is then used to estimate significances and posterior probabilities of attention scores. On natural and medical images, the proposed \emph{Attention Regularization} approach demonstrates a straightforward removal of spurious attention arising from noise, drastically improving shrinkage and sparsity. Quantitative evaluations are conducted using both simulation and real-world datasets. Our study highlights bootstrapping as a practical regularization tool when using attention scores as explanations for ViT. Code available: https://github.com/ncchung/AttentionRegularization
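The procedure described in the abstract — resample input features to build a baseline (null) distribution of attention scores, then threshold the observed scores by their estimated significance — can be sketched as follows. This is a minimal, illustrative sketch, not the authors' implementation: the single-head scaled dot-product attention, the choice of permuting key tokens as the resampling scheme, and the function names (`p_threshold_attention`) are all assumptions made for exposition.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_scores(q, k):
    # Scaled dot-product attention for a single head.
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d), axis=-1)

def p_threshold_attention(q, k, n_boot=200, p_th=0.1, seed=None):
    """Zero out attention scores that a bootstrap null deems insignificant.

    The null distribution is built by permuting the key tokens, which
    breaks genuine query-key associations while preserving the marginal
    distribution of the features (one possible resampling scheme).
    """
    rng = np.random.default_rng(seed)
    obs = attention_scores(q, k)                 # (n_q, n_k) observed scores
    null = np.empty((n_boot,) + obs.shape)
    for b in range(n_boot):
        perm = rng.permutation(k.shape[0])       # resample input features
        null[b] = attention_scores(q, k[perm])
    # Empirical p-value: fraction of null scores at least as large as observed.
    p = (null >= obs).mean(axis=0)
    reg = np.where(p <= p_th, obs, 0.0)          # hard-threshold insignificant scores
    return reg, p
```

Under this sketch, $p$-thresholding replaces scores whose empirical $p$-value exceeds the chosen threshold with exact zeros, which is what produces the sparsity gains reported for the regularized attention maps.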


Paper Structure

This paper contains 17 sections, 10 equations, 31 figures, 1 algorithm.

Figures (31)

  • Figure 1: Histogram of $z$-statistics from the observed attention score sample corresponding to one of the images we use in our analysis and from the derived bootstrap sample.
  • Figure 2: Example of a perturbed image (n02102040_821.JPEG) with the attention map before and after regularization via different shrinkage methods: $p$-thresholding and $l$-thresholding, with thresholds set at the 10th percentile of $p$-values and LFDR, respectively.
  • Figure 3: Regularization efficiency in ROI for various images from all the categories of the Imagenette validation subset expressed in terms of: a) mean percentile of scores in ROI w.r.t. the whole image; b) percentage of non-zero attention scores in ROI. Each dot denotes the results of using a method for a perturbed image. Thresholding values $p_{\rm th} = 0.3$ and $l_{\rm th} = 0.3$. The blue dashed line indicates no regularization.
  • Figure 4: The suppression factor $D$ for different categories of the Imagenette validation subset and different shrinkage methods. Lower $D$ corresponds to better regularization.
  • Figure 5: Sensitivity vs. specificity curves for 5 random images from each of the categories of the Imagenette validation subset (denoted with different colors) regularized via $p$-thresholding (left) and $l$-thresholding (right) for 50 values of the corresponding threshold spanning logarithmically from $0$ to $1$. The dots correspond to the $\pi_0$-threshold value for each curve. The black solid line shows the median of all the curves.
  • ...and 26 more figures