Weight Tying Biases Token Embeddings Towards the Output Space

Antonio Lopardo, Avyukth Harish, Catherine Arnett, Akshat Gupta

Abstract

Weight tying, i.e. sharing parameters between input and output embedding matrices, is common practice in language model design, yet its impact on the learned embedding space remains poorly understood. In this paper, we show that tied embedding matrices align more closely with output (unembedding) matrices than with input embeddings of comparable untied models, indicating that the shared matrix is shaped primarily for output prediction rather than input representation. This unembedding bias arises because output gradients dominate early in training. Using tuned lens analysis, we show this negatively affects early-layer computations, which contribute less effectively to the residual stream. Scaling input gradients during training reduces this bias, providing causal evidence for the role of gradient imbalance. This is mechanistic evidence that weight tying optimizes the embedding matrix for output prediction, compromising its role in input representation. These results help explain why weight tying can harm performance at scale and have implications for training smaller LLMs, where the embedding matrix contributes substantially to total parameter count.
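
The abstract's central measurement, per-token cosine similarity after linear alignment, can be made concrete with a short sketch. The snippet below fits a least-squares linear map from one embedding matrix onto another and then computes row-wise cosine similarity; it assumes NumPy arrays of shape (vocab_size, hidden_dim), and the names `tied`, `untied_in`, and `untied_out` are illustrative placeholders rather than the paper's actual code. The paper's exact alignment procedure may differ in detail.

```python
import numpy as np

def aligned_cosine_similarity(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Per-token cosine similarity between `source` and `target`
    after fitting a linear map W that best sends source -> target.

    Both matrices have shape (vocab_size, hidden_dim).
    """
    # Least-squares solution to source @ W ~= target (W has shape (d, d))
    W, *_ = np.linalg.lstsq(source, target, rcond=None)
    mapped = source @ W
    # Row-wise cosine similarity, with an epsilon for numerical safety
    num = (mapped * target).sum(axis=1)
    denom = np.linalg.norm(mapped, axis=1) * np.linalg.norm(target, axis=1) + 1e-12
    return num / denom

# Figure 1's comparison, in this notation:
# sim_in  = aligned_cosine_similarity(tied, untied_in)   # mean ~0.525 reported
# sim_out = aligned_cosine_similarity(tied, untied_out)  # mean ~0.719 reported
```

A linear alignment step is needed because two independently trained models occupy arbitrarily rotated coordinate systems; raw cosine similarity between them would be near zero even for functionally similar embeddings.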

Figures (9)

  • Figure 1: Per-token cosine similarity (after linear alignment) between the tied embedding matrix and the untied input (blue) and output (orange) matrices for two OLMo-1B training runs (tied and untied). Dashed lines indicate means. The tied matrix is substantially more aligned with the untied output (mean: 0.719) than the untied input (mean: 0.525).
  • Figure 2: Tuned lens KL divergence for tied vs untied OLMo-1B. Lower values indicate better alignment between a layer's representations and the output space. The clearest separation appears in the early layers, where the tied model shows higher KL divergence.
  • Figure 3: OLMo-1B-0724 (untied). Top: cosine similarity to the initial embeddings (step 0), measuring cumulative drift. Bottom: cosine similarity between consecutive checkpoints, measuring the rate of change between checkpoints.
  • Figure 4: Gradient flow to the shared embedding matrix in tied OLMo-1B during the first 1000 training steps. Left: L2 norm of gradients from the input embedding (blue) and output projection (orange) pathways on a log scale. Right: relative contribution of each pathway as a percentage of total gradient norm. (A code sketch of this decomposition follows the figure list.)
  • Figure 5: Norm-frequency relationship for OLMo-1B after 10k steps (20B tokens). Left: untied model's input (blue) and output (orange) matrices. Right: tied model's shared matrix (pink).
  • ...and 4 more figures
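
Two of the paper's mechanisms lend themselves to short sketches. First, the gradient decomposition behind Figure 4: because a tied matrix receives gradients from both the input-embedding lookup and the output projection, the two contributions can be isolated by detaching the matrix on one side of the network and backpropagating through the other. The module names `model.embed` and `model.backbone` below are hypothetical stand-ins; OLMo's actual module layout differs.

```python
import torch
import torch.nn.functional as F

def pathway_grad_norms(model, input_ids, labels):
    """L2 norms of the input-pathway and output-pathway gradients
    on a tied embedding matrix, computed with two forward passes."""
    W = model.embed.weight  # shared (tied) matrix, shape (V, d)

    def loss_with(detach_input: bool, detach_output: bool):
        W_in = W.detach() if detach_input else W
        W_out = W.detach() if detach_output else W
        hidden = F.embedding(input_ids, W_in)   # input-embedding pathway
        hidden = model.backbone(hidden)         # transformer stack (hypothetical name)
        logits = hidden @ W_out.t()             # output-projection pathway
        return F.cross_entropy(logits.view(-1, logits.size(-1)), labels.view(-1))

    g_in, = torch.autograd.grad(loss_with(False, True), W)   # only input pathway live
    g_out, = torch.autograd.grad(loss_with(True, False), W)  # only output pathway live
    return g_in.norm().item(), g_out.norm().item()
```

Second, the causal intervention of scaling input gradients during training. One standard way to implement such an intervention, assuming the paper used something equivalent, is an identity op in the forward pass that rescales gradients in the backward pass, applied to the embedding output so that only the input pathway is affected:

```python
class GradScale(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by `scale`
    in the backward pass."""
    @staticmethod
    def forward(ctx, x, scale):
        ctx.scale = scale
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output * ctx.scale, None

# In the model's forward pass (alpha > 1 boosts the input-pathway gradient):
# hidden = GradScale.apply(F.embedding(input_ids, W), alpha)
```

Because the output projection is untouched, this rescales only the gradient contribution flowing to the shared matrix through the embedding lookup.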