Stop Probing, Start Coding: Why Linear Probes and Sparse Autoencoders Fail at Compositional Generalisation

Vitória Barin Pacela, Shruti Joshi, Isabela Camacho, Simon Lacoste-Julien, David Klindt

Abstract

The linear representation hypothesis states that neural network activations encode high-level concepts as linear mixtures. However, under superposition, this encoding is a projection from a higher-dimensional concept space into a lower-dimensional activation space, and a linear decision boundary in the concept space need not remain linear after projection. In this setting, classical sparse coding methods with per-sample iterative inference leverage compressed sensing guarantees to recover latent factors. Sparse autoencoders (SAEs), on the other hand, amortise sparse inference into a fixed encoder, introducing a systematic gap. We show this amortisation gap persists across training set sizes, latent dimensions, and sparsity levels, causing SAEs to fail under out-of-distribution (OOD) compositional shifts. Through controlled experiments that decompose the failure, we identify dictionary learning -- not the inference procedure -- as the binding constraint: SAE-learned dictionaries point in substantially wrong directions, and replacing the encoder with per-sample FISTA on the same dictionary does not close the gap. An oracle baseline proves the problem is solvable with a good dictionary at all scales tested. Our results reframe the SAE failure as a dictionary learning challenge, not an amortisation problem, and point to scalable dictionary learning as the key open problem for sparse inference under superposition.
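To make the distinction drawn in the abstract concrete, here is a minimal NumPy sketch, not the paper's actual setup: the dimensions, the sampling scheme, and the regularisation weight `lam` are illustrative assumptions. It contrasts per-sample FISTA inference on a fixed dictionary with the single-pass ReLU encoder into which an SAE amortises sparse inference.

```python
# Minimal sketch (illustrative sizes and lambda, not the paper's exact setup):
# contrast per-sample FISTA inference on a fixed dictionary with the single-pass
# ReLU encoder that an SAE uses to amortise sparse inference.
import numpy as np

rng = np.random.default_rng(0)
d_z, d_y, k, n = 16, 8, 2, 512            # overcomplete concept space: d_z > d_y

# Ground-truth k-sparse latents z and a random mixing matrix W (superposition).
Z = np.zeros((n, d_z))
for i in range(n):
    active = rng.choice(d_z, size=k, replace=False)
    Z[i, active] = rng.uniform(0.5, 1.0, size=k)
W = rng.normal(size=(d_y, d_z)) / np.sqrt(d_y)
Y = Z @ W.T                                # observed activations y = W z

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(y, D, lam=0.05, n_iter=200):
    """Per-sample inference: argmin_z 0.5 * ||y - D z||^2 + lam * ||z||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth term
    z = z_prev = np.zeros(D.shape[1])
    w, t = z.copy(), 1.0
    for _ in range(n_iter):
        grad = D.T @ (D @ w - y)
        z = soft_threshold(w - grad / L, lam / L)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t ** 2)) / 2.0
        w = z + ((t - 1.0) / t_next) * (z - z_prev)
        z_prev, t = z, t_next
    return z

def sae_encode(Y, E, b):
    """Amortised inference: one fixed affine map plus ReLU, applied to every sample."""
    return np.maximum(Y @ E.T + b, 0.0)

# FISTA with the ground-truth dictionary (the oracle setting from the abstract);
# the random encoder below only illustrates the functional form of amortisation.
Z_fista = np.stack([fista(y, W) for y in Y])
E, b = rng.normal(size=(d_z, d_y)), np.zeros(d_z)   # stand-in encoder weights
Z_sae = sae_encode(Y, E, b)

print("FISTA vs. ground truth corr:", np.corrcoef(Z.ravel(), Z_fista.ravel())[0, 1])
```

With the ground-truth mixing matrix used as the dictionary, the iterative solver can recover the sparse codes (the oracle setting); the point of the sketch is only the structural difference between the two inference modes, not a benchmark.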

Figures (63)

  • Figure 1: Binary classification with $t = \mathbf{1}\{z_1 > 0.5\}$ (green: not explicit, purple: explicit). (a) When $d_z = d_y$, the linear decision boundary in latent space remains linear after mixing ${\mathbf{y}} = {\mathbf{W}}{\mathbf{z}}$. (b) When $d_z > d_y$ (overcompleteness) and ${\mathbf{z}}$ is sparse, we can project down into non-overlapping regions (i.e., compressed sensing is possible), but the decision boundary becomes nonlinear in activation space, making linear probes insufficient.
  • Figure 2: Compositional OOD split. Left: In-distribution (ID) training data covers the support pairs $(z_1,z_2)$ and $(z_2,z_3)$, while the novel combination $(z_1,z_3)$ is held out for OOD evaluation (a minimal data-generation sketch follows this list). Right: Same split in activation space ${\mathbf{y}}={\mathbf{W}}{\mathbf{z}}$.
  • Figure 3: SAEs fail to recover latent variables under superposition, but sparse coding succeeds. Top left: Ground-truth latents ($d_z=3$, $k=2$); colors denote active-variable combinations. Top right: Activation space ${\mathbf{y}} = {\mathbf{W}}{\mathbf{z}}$ ($d_y=2$); factors overlap after projection. Bottom left: SAE reconstruction; planes are not recovered. Bottom right: Sparse coding reconstruction; latents are identified up to scaling.
  • Figure 4: Linear probes fail OOD under overcompleteness. Each column sets $t = z_i$. The linear classifier fits the ID decision boundary well, but the compression ${\mathbf{y}} = {\mathbf{A}}{\mathbf{z}}$ introduces nonlinearity that is only exposed OOD, causing catastrophic generalisation failure (columns 1 and 3) or even poor ID accuracy (column 2).
  • Figure 5: The amortisation gap persists across undersampling ratios. Per-sample methods (FISTA) exhibit a sharp phase transition to near-perfect MCC once $\delta$ exceeds the compressed-sensing threshold; SAEs plateau at $0.2$--$0.5$ MCC regardless of $\delta$. Each panel shows a different $(d_z, k)$ combination.
  • ...and 58 more figures
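As referenced in the Figure 2 caption, here is a minimal sketch of the compositional ID/OOD split with the illustrative sizes $d_z=3$, $d_y=2$, $k=2$; the uniform sampling of the two active latents is an assumption, and the paper's exact generator may differ.

```python
# Minimal sketch of the compositional ID/OOD split of Figures 2-3 (assumed
# uniform sampling of the two active latents; not the paper's exact generator).
import numpy as np

rng = np.random.default_rng(0)
d_z, d_y, n_per_pair = 3, 2, 500

id_pairs  = [(0, 1), (1, 2)]   # ID support: (z_1, z_2) and (z_2, z_3) co-active
ood_pairs = [(0, 2)]           # OOD: the held-out combination (z_1, z_3)

def sample(pairs):
    """Draw 2-sparse latents whose active coordinates come from the given pairs."""
    Z = np.zeros((n_per_pair * len(pairs), d_z))
    for p, (i, j) in enumerate(pairs):
        rows = slice(p * n_per_pair, (p + 1) * n_per_pair)
        Z[rows, i] = rng.uniform(size=n_per_pair)
        Z[rows, j] = rng.uniform(size=n_per_pair)
    return Z

W = rng.normal(size=(d_y, d_z))            # mixing (superposition) matrix
Z_id, Z_ood = sample(id_pairs), sample(ood_pairs)
Y_id, Y_ood = Z_id @ W.T, Z_ood @ W.T      # activations y = W z

# Fit a probe, SAE, or sparse coder on (Y_id, Z_id) and evaluate on the held-out
# combination (Y_ood, Z_ood) to reproduce the compositional shift.
print(Y_id.shape, Y_ood.shape)
```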