Collapse-Free Prototype Readout Layer for Transformer Encoders

Giansalvo Cirrincione, Rahul Ranjeev Kumar

Abstract

DDCL-Attention is a prototype-based readout layer for transformer encoders that replaces simple pooling methods, such as mean pooling or class tokens, with a learned compression mechanism. It uses a small set of global prototype vectors and assigns tokens to them through soft probabilistic matching, producing compact token summaries at linear complexity in sequence length. The method offers three main advantages. First, it avoids prototype collapse through an exact decomposition of the training loss into a reconstruction term and a diversity term, ensuring that prototypes remain distinct. Second, its joint training with the encoder is shown to be stable under a practical timescale condition, using Tikhonov's singular perturbation theory and explicit learning-rate constraints. Third, the same framework supports three uses: a final readout layer, a differentiable codebook extending VQ-VAE, and a hierarchical document compressor. Experiments on four datasets confirm the theoretical predictions: the loss decomposition holds exactly, prototype separation grows as expected when the stability condition is met, and the codebook reaches full utilization, outperforming standard hard vector quantization. An additional study on orbital debris classification shows that the method also applies beyond standard NLP and vision tasks, including scientific tabular data.
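
The mechanism can be made concrete with a short sketch. The PyTorch snippet below implements a generic soft prototype readout of the kind the abstract describes: each token is softly matched to $K$ learned global prototypes via a distance softmax, producing $K$ compact summaries at $O(NK)$ cost, linear in sequence length. This is a minimal sketch under stated assumptions, not the paper's implementation; the class name `SoftPrototypeReadout`, the distance-softmax form of the assignment, and the `temperature` parameter are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftPrototypeReadout(nn.Module):
    """Softly assign N token embeddings to K global prototypes.

    Cost is O(N * K) per sequence, i.e. linear in sequence length,
    in contrast to O(N^2) token-token self-attention.
    """

    def __init__(self, d_model: int, num_prototypes: int, temperature: float = 1.0):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, d_model))
        self.temperature = temperature

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, N, d_model)
        # Squared Euclidean distance from every token to every prototype.
        # (Broadcast builds a (batch, N, K, d_model) tensor; fine for a sketch.)
        diff = tokens.unsqueeze(2) - self.prototypes       # (batch, N, K, d_model)
        d2 = diff.pow(2).sum(dim=-1)                       # (batch, N, K)
        # Soft probabilistic assignment of each token over the K prototypes.
        q = F.softmax(-d2 / self.temperature, dim=-1)      # rows sum to 1
        # Compact summaries: assignment-weighted token means, one per prototype.
        mass = q.sum(dim=1, keepdim=True).clamp_min(1e-9)  # (batch, 1, K)
        summaries = torch.einsum('bnk,bnd->bkd', q / mass, tokens)
        return summaries                                   # (batch, K, d_model)
```

A downstream classifier head could then pool or flatten the $K$ summaries in place of a mean-pooled vector or class token.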

Paper Structure

This paper contains 65 sections, 9 theorems, 33 equations, 6 figures, 7 tables, and 1 algorithm.

Key Result

Proposition 1

Let $f_\theta$ be any differentiable encoder. For any $\theta$, $P$, $T>0$, the identity $\mathcal{L}_q(\theta,P) = L_{\mathrm{OLS}}(\theta,P) + V(\theta,P)$ holds exactly, with $V(\theta,P)\geq 0$. $\blacktriangleleft$
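
Under the usual reading of $\mathcal{L}_q$ as a soft quantization loss $\sum_i\sum_k q_{ik}\lVert h_i-p_k\rVert^2$, Proposition 1 is the exact bias-variance split around the per-token barycentre $\bar h_i=\sum_k q_{ik}p_k$: a reconstruction term $L_{\mathrm{OLS}}=\sum_i\lVert h_i-\bar h_i\rVert^2$ plus a non-negative spread term $V=\sum_i\sum_k q_{ik}\lVert p_k-\bar h_i\rVert^2$. The snippet below checks this identity numerically; the softmax form of $q$ and all variable names are assumptions for illustration, not the paper's notation.

```python
import torch

torch.manual_seed(0)
N, K, d, T = 128, 8, 16, 0.5
h = torch.randn(N, d)                          # token embeddings f_theta(x_i)  (assumed setup)
p = torch.randn(K, d)                          # prototype matrix P

d2 = torch.cdist(h, p) ** 2                    # ||h_i - p_k||^2, shape (N, K)
q = torch.softmax(-d2 / T, dim=-1)             # soft assignments, rows sum to 1

L_q = (q * d2).sum()                           # soft quantization loss
h_bar = q @ p                                  # per-token barycentre sum_k q_ik p_k
L_ols = ((h - h_bar) ** 2).sum()               # reconstruction ("OLS") term
V = (q * (torch.cdist(h_bar, p) ** 2)).sum()   # assignment-spread (diversity) term

assert torch.allclose(L_q, L_ols + V)          # identity holds exactly, up to float error
assert V.item() >= 0.0                         # V is a weighted sum of squares
```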

Figures (6)

  • Figure 1: 2D PCA projection of space debris features ($K=4$, after 500 epochs). Left: coloured by true orbital regime. Right: coloured by DDCL-Attention prototype assignment. Stars mark prototype positions $\mathbf{p}_k$. The LEO/HEO overlap in the lower left region reflects the genuine orbital ambiguity between low-altitude circular and Molniya-type eccentric orbits in 2D; the full $m=5$-dimensional feature space resolves this via the eccentricity feature.
  • Figure 2: Exp 1: training dynamics for SST-2 (top), IMDB (middle), and 20 Newsgroups (bottom). Each row shows the loss decomposition $\mathcal{L}_q=L_{\mathrm{OLS}}+V$ (top left), variance term $V\geq 0$ (top centre), prototype separation $\mathcal{S}(P)$ (top right), assignment entropy $H(Q)$ (bottom left), clustering quality ACC/NMI/ARI (bottom centre), and phase portrait $(V/N, \mathcal{S}(P))$ (bottom right).
  • Figure 3: PCA projection of the 20 learned prototypes ($K=20$) on 20 Newsgroups after 15 epochs.
  • Figure 4: Exp 2: DDCL-Attention soft VQ on CIFAR-10 ($K=64$, 50 epochs). Left: training dynamics (loss decomposition, $V\geq 0$, codebook utilisation). Right: codebook utilisation over epochs for DDCL-Attention vs. hard VQ-VAE; DDCL-Attention achieves 100% from epoch 1 while VQ-VAE requires 26 epochs. (A minimal soft-vs-hard assignment sketch follows this figure list.)
  • Figure 5: Exp 3: hierarchical DDCL-Attention on 20 Newsgroups ($K_1=32$, $K_2=20$, frozen BERT, 15 epochs). Left: training dynamics at both levels; $V^{(1)}\geq 0$ and $V^{(2)}\geq 0$ confirmed simultaneously at every epoch (Proposition \ref{prop:hierarchical}). Right: t-SNE projection of the level-2 document representations coloured by DDCL-Attention assignment.
  • ...and 1 more figure
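
Figure 4's utilisation contrast is easy to see mechanically. Under hard VQ-VAE assignment each vector commits to its single nearest code, so codes that are never nearest receive no gradient and can stay dead; under soft assignment every code carries a strictly positive weight from the first step. The sketch below illustrates the two assignment rules on random data; it is an illustrative toy under assumed shapes and names, not the paper's training loop.

```python
import torch
import torch.nn.functional as F

def hard_vq(z, codebook):
    """Hard VQ-VAE assignment: each vector maps to its single nearest code.
    Codes never selected as nearest receive no gradient signal."""
    d2 = torch.cdist(z, codebook) ** 2           # (N, K)
    idx = d2.argmin(dim=-1)
    return codebook[idx], idx

def soft_vq(z, codebook, temperature=0.5):
    """Soft assignment: every code participates with strictly positive
    weight q_ik, so all K codes receive gradient from the start."""
    d2 = torch.cdist(z, codebook) ** 2
    q = F.softmax(-d2 / temperature, dim=-1)     # (N, K), all entries > 0
    return q @ codebook, q

# Toy demo (random data; utilisation gaps are larger with trained encoders).
z = torch.randn(512, 32)
codebook = torch.randn(64, 32)
_, idx = hard_vq(z, codebook)
print("hard: codes used =", idx.unique().numel(), "/ 64")
_, q = soft_vq(z, codebook)
print("soft: codes with positive weight =", (q.sum(0) > 0).sum().item(), "/ 64")
```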

Theorems & Definitions (23)

  • Definition 1: DDCL-Attention
  • Remark 1
  • Proposition 1: Decomposition universality
  • Proof
  • Proposition 2: Encoder gradient
  • Proof
  • Theorem 1: Time-scale separation
  • Proof
  • Remark 2
  • Proposition 3: Local stability condition
  • ...and 13 more