Post-hoc Self-explanation of CNNs

Ahcène Boubekki, Line H. Clemmensen

Abstract

Although standard Convolutional Neural Networks (CNNs) can be mathematically reinterpreted as Self-Explainable Models (SEMs), their built-in prototypes do not on their own accurately represent the data. Replacing the final linear layer with a $k$-means-based classifier addresses this limitation without compromising performance. This work introduces a common formalization of $k$-means-based post-hoc explanations for the classifier, the encoder's final output (B4), and combinations of intermediate feature activations. The latter approach leverages the spatial consistency of convolutional receptive fields to generate concept-based explanation maps, which are supported by gradient-free feature attribution maps. Empirical evaluation with a ResNet34 shows that using shallower, less compressed feature activations, such as those from the last three blocks (B234), yields more semantically faithful explanations at the cost of a slight reduction in predictive performance.
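To make the abstract's core move concrete, here is a minimal sketch of replacing a CNN's final linear layer with a $k$-means-based nearest-prototype classifier. This is an assumed reconstruction, not the paper's reference implementation; the names `KMeansClassifier` and `fit_prototypes` are hypothetical.

```python
# Sketch: swap the linear head of a ResNet34 for a k-means prototype classifier.
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans
from torchvision.models import resnet34

class KMeansClassifier(nn.Module):
    """Nearest-prototype classifier over per-class k-means centroids."""
    def __init__(self, centroids: torch.Tensor, proto_labels: torch.Tensor):
        super().__init__()
        self.register_buffer("centroids", centroids)        # (K, D) prototypes
        self.register_buffer("proto_labels", proto_labels)  # (K,) class per prototype

    def forward(self, e: torch.Tensor) -> torch.Tensor:
        # e: (N, D) embeddings; predict the class of the nearest prototype.
        dist = torch.cdist(e, self.centroids)  # (N, K) Euclidean distances
        return self.proto_labels[dist.argmin(dim=1)]

def fit_prototypes(emb: np.ndarray, y: np.ndarray, k_per_class: int = 1):
    """Fit one k-means per class on that class's training embeddings."""
    cents, labs = [], []
    for c in np.unique(y):
        km = KMeans(n_clusters=k_per_class, n_init=10).fit(emb[y == c])
        cents.append(km.cluster_centers_)
        labs.extend([int(c)] * k_per_class)
    return (torch.tensor(np.concatenate(cents), dtype=torch.float32),
            torch.tensor(labs))

# Encoder: ResNet34 with the final linear layer removed (pooled B4 output).
backbone = resnet34(weights="IMAGENET1K_V1")
encoder = nn.Sequential(*list(backbone.children())[:-1], nn.Flatten())
```

Because the new head is parameter-free at inference time (only centroid distances), the encoder and its accuracy can be left untouched, which matches the abstract's "without compromising performance" claim.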

Paper Structure

This paper contains 17 sections, 1 theorem, 11 equations, 2 figures, and 2 tables.

Key Result

Theorem 1

Convolutional neural network classifiers are self-explainable models with $C$ prototypes corresponding to the column vectors of the classifier's weight matrix.
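The reinterpretation behind the theorem can be sketched as follows, using standard notation assumed here rather than taken from the paper. Write the network as an encoder $\mathbf{e}(\mathbf{x})$ followed by a linear classifier with weight columns $\mathbf{c}_1,\dots,\mathbf{c}_C$ and biases $b_j$:

$$\hat{y}(\mathbf{x}) = \arg\max_{1 \le j \le C} \; \mathbf{c}_j^\top \mathbf{e}(\mathbf{x}) + b_j .$$

Expanding the squared distance gives $\mathbf{c}_j^\top \mathbf{e}(\mathbf{x}) = \tfrac{1}{2}\left(\|\mathbf{e}(\mathbf{x})\|^2 + \|\mathbf{c}_j\|^2 - \|\mathbf{e}(\mathbf{x}) - \mathbf{c}_j\|^2\right)$, and since $\|\mathbf{e}(\mathbf{x})\|^2$ does not depend on $j$,

$$\hat{y}(\mathbf{x}) = \arg\min_{1 \le j \le C} \; \|\mathbf{e}(\mathbf{x}) - \mathbf{c}_j\|^2 - \|\mathbf{c}_j\|^2 - 2 b_j ,$$

i.e., a (biased) nearest-prototype rule over the $C$ classifier columns.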

Figures (2)

  • Figure 1: Interpretation process for B4 (left) and B234 (right) on a CUB-200 red-bellied woodpecker. Red and blue indicate higher and lower feature importance, respectively. Representative patches are the closest training examples to each prototype; border colors match the explanation map segments. (A concept-map sketch follows this list.)
  • Figure 2: UMAP projection of training and test (lower opacity) embeddings from the twenty sparrow classes of CUB-200. Left: classifier prototypes $\mathbf{c}_j$, shown as crosses, and rescaled to the class norm, shown as triangles. Right: KMEx prototypes. (A plotting sketch also follows this list.)
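A minimal sketch of how Figure 1's explanation maps could be assembled, under the assumption that each spatial position of a block's feature map is assigned to its nearest k-means centroid and the resulting segmentation is upsampled to image size. The function name is hypothetical; for B234, the B2/B3/B4 maps would first be resized to a common resolution and concatenated channel-wise before clustering.

```python
# Sketch: per-location prototype assignment over a conv feature map,
# exploiting the spatial consistency of receptive fields.
import torch
import torch.nn.functional as F

def concept_map(feats: torch.Tensor, centroids: torch.Tensor,
                out_size: tuple) -> torch.Tensor:
    # feats:     (1, D, H, W) activations from one block (e.g. B4)
    # centroids: (K, D) k-means centroids fitted on such activations
    _, D, H, W = feats.shape
    vecs = feats.permute(0, 2, 3, 1).reshape(-1, D)       # one vector per location
    assign = torch.cdist(vecs, centroids).argmin(dim=1)   # nearest centroid per location
    seg = assign.reshape(1, 1, H, W).float()
    # Nearest-neighbour upsampling keeps hard segment boundaries.
    return F.interpolate(seg, size=out_size, mode="nearest").long()[0, 0]
```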
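A Figure-2-style plot can likewise be approximated with standard tools; a minimal sketch assuming umap-learn and matplotlib (the paper's exact plotting setup is not specified here):

```python
# Sketch: project embeddings and prototypes into one shared 2-D UMAP space.
import matplotlib.pyplot as plt
import umap  # pip install umap-learn

def plot_embedding_map(train_emb, test_emb, prototypes):
    reducer = umap.UMAP(n_components=2, random_state=0).fit(train_emb)
    z_tr, z_te, z_pr = (reducer.transform(a)
                        for a in (train_emb, test_emb, prototypes))
    plt.scatter(z_tr[:, 0], z_tr[:, 1], s=4, alpha=0.8, label="train")
    plt.scatter(z_te[:, 0], z_te[:, 1], s=4, alpha=0.3, label="test")  # lower opacity
    plt.scatter(z_pr[:, 0], z_pr[:, 1], marker="x", color="k", s=60,
                label="prototypes")
    plt.legend()
    plt.show()
```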

Theorems & Definitions (2)

  • Theorem 1
  • Proof of Theorem 1