
Confidence Regulation Neurons in Language Models

Alessandro Stolfo, Ben Wu, Wes Gurnee, Yonatan Belinkov, Xingyi Song, Mrinmaya Sachan, Neel Nanda

TL;DR

This work shows that entropy neurons operate by writing onto an unembedding null space, allowing them to impact the residual stream norm with minimal direct effect on the logits themselves. It also presents a detailed case study in which entropy neurons actively manage confidence in the setting of induction, i.e., detecting and continuing repeated subsequences.

Abstract

Despite their widespread use, the mechanisms by which large language models (LLMs) represent and regulate uncertainty in next-token predictions remain largely unexplored. This study investigates two critical components believed to influence this uncertainty: the recently discovered entropy neurons and a new set of components that we term token frequency neurons. Entropy neurons are characterized by an unusually high weight norm and influence the final layer normalization (LayerNorm) scale to effectively scale down the logits. Our work shows that entropy neurons operate by writing onto an unembedding null space, allowing them to impact the residual stream norm with minimal direct effect on the logits themselves. We observe entropy neurons across a range of models with up to 7 billion parameters. In contrast, token frequency neurons, which we discover and describe here for the first time, boost or suppress each token's logit proportionally to its log frequency, thereby shifting the output distribution towards or away from the unigram distribution. Finally, we present a detailed case study in which entropy neurons actively manage confidence in the setting of induction, i.e., detecting and continuing repeated subsequences.
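To make the null-space mechanism concrete, the following is a minimal sketch (not the authors' code) of the projection metric $\rho$ referenced in Figure 2 below: the fraction of a neuron's output weight norm lying in the effective null space of the unembedding, i.e., the span of the singular directions of $\mathbf{W}_\mathrm{U}$ with the smallest singular values. It assumes PyTorch; `W_U` (a $d_\mathrm{model} \times d_\mathrm{vocab}$ unembedding matrix) and `w_out` (a neuron's output weight vector) are illustrative stand-ins populated with random tensors rather than trained weights.

```python
import torch

def null_space_projection(w_out: torch.Tensor, W_U: torch.Tensor, k: int = 40) -> float:
    """Fraction of a neuron's output norm lying in the effective null space
    of the unembedding: the span of the k left singular vectors of W_U
    with the smallest singular values."""
    # SVD of the unembedding; with full_matrices=False and d_model <= d_vocab,
    # U has shape (d_model, d_model) and S is sorted in descending order.
    U, S, Vh = torch.linalg.svd(W_U, full_matrices=False)
    # Residual-stream directions with tiny singular values barely move the
    # logits, so writes along them mainly change the residual stream norm.
    null_basis = U[:, -k:]                      # (d_model, k)
    proj = null_basis @ (null_basis.T @ w_out)  # projection onto null space
    return (proj.norm() / w_out.norm()).item()

# Illustrative usage with GPT-2 Small shapes (d_model=768, d_vocab=50257);
# real weights would come from a trained model, not torch.randn.
W_U = torch.randn(768, 50257)
w_out = torch.randn(768)
print(f"rho = {null_space_projection(w_out, W_U, k=40):.3f}")
```

Per the Figure 2 caption, the paper sets $k = 40 \approx 0.01 \cdot d_\mathrm{model}$ for LLaMA2 7B; the appropriate $k$ is model-dependent, so it is left as a parameter here.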

Paper Structure

This paper contains 42 sections, 12 equations, and 14 figures.

Figures (14)

  • Figure 1: Entropy and Prediction. We mean-ablate final-layer neurons across 4000 tokens and measure the variation in the entropy of the model's output $P_\mathrm{model}$ against the average change in the model's prediction ($\mathrm{argmax}_x P_\mathrm{model}(x)$); a sketch of this ablation setup follows the figure list. We identify a set of neurons whose effect depends on LayerNorm (red points; metric described in Sec. \ref{sec:en_mechanism}), and which affect the model's confidence (quantified as the entropy of $P_\mathrm{model}$) with minimal impact on the prediction.
  • Figure 2: Identifying and Analyzing Entropy Neurons. (a) Neurons in GPT-2 Small displayed by their weight norm and variance in logit attribution. Entropy neurons (red) have high norm and low logit variance. (b) Causal graph showing the total effect and direct effect (bypassing LayerNorm) of a neuron on the model's output. (c) Comparison of total and direct effects on model loss for entropy neurons and randomly selected neurons. (d) Singular values and cosine similarity between neuron output weights and singular vectors of $\mathbf{W}_\mathrm{U}$. (e) Entropy neurons (red) show significant LayerNorm-mediated effects and high projection onto the null space ($\rho$). (f) Relationship between $\rho$ and the LayerNorm-mediated effect in LLaMA2 7B. $\rho$ is computed with $k = 40 \approx 0.01 \cdot d_\mathrm{model}$. Color represents absolute change in entropy upon ablation ($\Delta\mathrm{H}$).
  • Figure 3: Token Frequency Neurons in Pythia 410M. (a) $\mathrm{D}_\mathrm{KL}(P_\mathrm{freq} \| P_\mathrm{model})$ and entropy are negatively correlated. (b) Scatter plot of neurons highlighting token frequency neurons (in green), which have a high effect on $\mathrm{D}_\mathrm{KL}(P_\mathrm{freq} \| P_\mathrm{model})$ that is significantly mediated by the token frequency direction. (c) Box plots showing a substantial difference between total and direct effects for token frequency neurons.
  • Figure 4: Examples of Neuron Activity in Language Models. (a) Change in loss after ablation of entropy neuron 11.2378 in GPT-2 Small. Color indicates the reciprocal rank ($\mathrm{RR}$) of the correct token prediction. (b) Activation of neuron 11.2378 on an example from the C4 corpus. The neuron mitigates a loss spike at the token "Mes," after which the model predicts "otherapy." (c) Change in entropy and KL divergence on correct tokens ($\mathrm{RR} = 1$) after ablation of neuron 23.417 in Pythia 410M. The neuron increases entropy and aligns the model's output with the token frequency distribution.
  • Figure 5: Entropy Neurons on Induction. (a) Activations, entropy, and loss across duplicated 200-token input sequences. (b) The effect of clip mean-ablation of specific entropy neurons. Neuron 11.2378 shows the most significant impact, with up to a 70% reduction in entropy. (c) BOS ablation of induction heads: Upon the ablation of three induction heads in GPT-2 Small, the activation of entropy neuron 11.2378 decreases substantially.
  • ...and 9 more figures
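The ablation setup referenced in the Figure 1 caption can be sketched as follows, assuming the TransformerLens library and GPT-2 Small: clamp one final-layer MLP neuron to its mean activation and measure how the entropy of the output distribution and its KL divergence from the token frequency distribution change. Neuron 11.2378 is taken from the captions above; the single illustrative prompt and the uniform stand-in for $P_\mathrm{freq}$ are assumptions (the paper ablates across 4000 tokens and would use corpus-estimated token frequencies).

```python
import math
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
LAYER, NEURON = 11, 2378                 # entropy neuron 11.2378
HOOK = f"blocks.{LAYER}.mlp.hook_post"   # post-activation MLP neurons

tokens = model.to_tokens("The quick brown fox jumps over the lazy dog")

# The mean activation would normally be estimated over a large corpus;
# the mean over this single prompt is used purely for illustration.
_, cache = model.run_with_cache(tokens)
mean_act = cache[HOOK][..., NEURON].mean()

def mean_ablate(act, hook):
    act[..., NEURON] = mean_act          # clamp the neuron to its mean
    return act

# Uniform placeholder for the unigram distribution P_freq; replace with
# log token frequencies estimated from a corpus.
log_p_freq = torch.full((model.cfg.d_vocab,), -math.log(model.cfg.d_vocab))

for name, logits in [
    ("clean", model(tokens)),
    ("ablated", model.run_with_hooks(tokens, fwd_hooks=[(HOOK, mean_ablate)])),
]:
    logp = torch.log_softmax(logits[0, -1], dim=-1)  # next-token log-probs
    entropy = -(logp.exp() * logp).sum()
    # D_KL(P_freq || P_model) = sum_x P_freq(x) * (log P_freq(x) - log P_model(x))
    kl = (log_p_freq.exp() * (log_p_freq - logp)).sum()
    print(f"{name}: entropy {entropy:.3f}, D_KL(P_freq || P_model) {kl:.3f}")
```

For an entropy neuron, the expectation from Figure 1 is that ablation shifts the entropy noticeably while the argmax prediction is largely unchanged; for a token frequency neuron such as 23.417 in Pythia 410M, the KL term is the more informative quantity.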