
Entanglement as Memory: Mechanistic Interpretability of Quantum Language Models

Nathan Roll

Abstract

Quantum language models have shown competitive performance on sequential tasks, yet whether trained quantum circuits exploit genuinely quantum resources -- or merely embed classical computation in quantum hardware -- remains unknown. Prior work has evaluated these models through endpoint metrics alone, without examining the memory strategies they actually learn internally. We present the first mechanistic interpretability study of quantum language models, combining causal gate ablation, entanglement tracking, and density-matrix interchange interventions on a controlled long-range dependency task. We find that single-qubit models are exactly classically simulable and converge to the same geometric strategy as matched classical baselines, while two-qubit models with entangling gates learn a representationally distinct strategy that encodes context in inter-qubit entanglement -- confirmed by three independent causal tests (p < 0.0001, d = 0.89). On real quantum hardware, only the classical geometric strategy survives device noise; the entanglement strategy degrades to chance. These findings establish mechanistic interpretability as a tool for the science of quantum language models and reveal a noise-expressivity tradeoff governing which learned strategies survive deployment.
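
To make the abstract's density-matrix interchange intervention concrete, here is a minimal sketch of one such test on a single-qubit model. The gate parameters, token encodings, and sign-of-Z readout below are illustrative assumptions, not the paper's actual setup; the idea is that the state from a context-A run is spliced into a context-B run, and the prediction should follow the donor state.

```python
# A minimal interchange-intervention sketch (illustrative; not the paper's
# code). Parameters and token encodings are assumptions chosen so distractor
# steps reduce to pure Z-rotations, as in the decoupled regime.
import numpy as np

def rz(t):
    return np.array([[np.exp(-1j * t / 2), 0],
                     [0, np.exp(1j * t / 2)]])

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

def step(rho, x, params=(0.3, 0.0, 0.5)):
    # One token step as a channel on the density matrix: rho -> U rho U^dag,
    # with U(theta, x) = Rz(theta3) Ry(theta2 + x) Rz(theta1).
    t1, t2, t3 = params
    u = rz(t3) @ ry(t2 + x) @ rz(t1)
    return u @ rho @ u.conj().T

def run(tokens, rho=None):
    rho = np.diag([1.0, 0.0]).astype(complex) if rho is None else rho
    for x in tokens:
        rho = step(rho, x)
    return rho

def predict(rho):
    # Readout: sign of the Bloch Z-coordinate, Tr(rho sigma_z).
    return "A" if np.real(np.trace(rho @ np.diag([1.0, -1.0]))) > 0 else "B"

ctx_A, ctx_B, dist = 0.4, np.pi - 0.4, 0.0       # hypothetical token angles
seq_A = [ctx_A] + [dist] * 10
seq_B = [ctx_B] + [dist] * 10
print(predict(run(seq_A)), predict(run(seq_B)))  # -> A B

# Interchange: splice the post-context density matrix from run A into run B
# before its distractors; the prediction should follow the donor state.
rho_donor = run(seq_A[:1])
patched = run(seq_B[1:], rho=rho_donor)
print("patched run predicts:", predict(patched))  # -> A
```

The prediction flipping to follow the spliced-in state is the causal signature such interchange interventions test for.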


Paper Structure

This paper contains 25 sections, 2 theorems, 1 equation, 4 figures, and 5 tables.

Key Result

Theorem 1

In the shared-parameter architecture where $U(\theta, x) = R_z(\theta_3)R_y(\theta_2 + x)R_z(\theta_1)$ processes both context and distractor tokens: context encoding requires $\theta_2 \neq 0$ (so distinct inputs produce distinct Z-coordinates), while distractor invariance requires $\theta_2 = 0$ (so distractor tokens leave the Z-coordinate, and hence the stored context, unchanged). Since $\theta_2$ cannot be both zero and nonzero, no shared parameterization satisfies both requirements.
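
The tradeoff can be illustrated numerically. In the sketch below, the parameter values, the symmetric context encoding $x_A = -x_B = \pi/2$, and the distractor encoding $x = 0$ are all hypothetical choices, not the paper's trained values.

```python
# Numerical sketch of Theorem 1 (assumed values; not the paper's code).
import numpy as np

def rz(t):
    return np.array([[np.exp(-1j * t / 2), 0],
                     [0, np.exp(1j * t / 2)]])

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

def u_gate(t1, t2, t3, x):
    # Shared-parameter gate U(theta, x) = Rz(theta3) Ry(theta2 + x) Rz(theta1):
    # every token x passes through the same Y-rotation angle theta2 + x.
    return rz(t3) @ ry(t2 + x) @ rz(t1)

def bloch_z(psi):
    # Z-coordinate of the Bloch vector: <psi| sigma_z |psi>.
    return float(np.real(psi.conj() @ np.diag([1.0, -1.0]) @ psi))

x_A, x_B, x_dist = +np.pi / 2, -np.pi / 2, 0.0   # hypothetical encodings

for t2 in (0.8, 0.0):                 # theta_2 != 0 vs. theta_2 = 0
    for ctx in (x_A, x_B):
        psi = u_gate(0.3, t2, 0.5, ctx) @ np.array([1.0, 0.0], dtype=complex)
        z_ctx = bloch_z(psi)          # theta_2 = 0 sends both contexts to Z = 0
        for _ in range(10):           # theta_2 != 0 keeps rotating about Y on
            psi = u_gate(0.3, t2, 0.5, x_dist) @ psi   # every distractor step
        print(f"theta2={t2:.1f} ctx={ctx:+.2f}: Z after context {z_ctx:+.3f}, "
              f"after 10 distractors {bloch_z(psi):+.3f}")
```

With $\theta_2 = 0$ the two contexts land on the same Z-coordinate, so nothing is encoded; with $\theta_2 \neq 0$ the contexts separate, but every distractor re-applies $R_y(\theta_2)$ and erodes the separation, which is exactly the tradeoff the theorem formalizes.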

Figures (4)

  • Figure 1: Hemisphere separation distinguishes successful from failing quantum memory. Bloch sphere trajectories for 10-distractor sequences. Left: Shared-parameter model (MinimalQLM). Contexts A and B overlap after distractor tokens, causing misclassification (76% accuracy). Right: Decoupled model (DecoupledQLM). Contexts remain in opposite hemispheres throughout, achieving 100% generalization. A classical SO(3) baseline produces identical trajectories, confirming this mechanism is not inherently quantum.
  • Figure 2: Z-coordinate preservation is robust across 100 distractors. Bloch-sphere Z-coordinate for Context A and Context B across 0--100 distractors (DecoupledQLM). Hemisphere separation grows slightly with length because distractor rotations push states toward the poles. The shared model loses separation after $\sim$5 distractors.
  • Figure 3: Entanglement entropy diverges by context, providing a representationally distinct memory channel. Von Neumann entanglement entropy $S(\rho_0)$ per timestep for the trained two-qubit model with CNOT gates on 10-distractor sequences. Context A and B traces diverge after the context token (timestep 0) and maintain distinct entanglement levels across all distractors. Ablating the CNOT eliminates this divergence, collapsing both traces to zero entropy; the model then reverts to Z-preservation. (A minimal sketch of this entropy measurement follows this list.)
  • Figure 4: A sharp phase transition separates viable from collapsed geometric memory. Distractor rotation $\theta_2^{\mathrm{dist}}$ sweeps from 0 to $\pi$ (10 distractors). (a) Classification accuracy: robust up to $\theta_2^{\mathrm{dist}} \approx 0.83$ rad, then drops to chance; recovery at $\pi$ is period-2 recurrence. (b) Z-coordinate variance across timesteps; lower variance indicates more stable memory. (c) Heatmap of Bloch Z-coordinate per timestep (columns) at each $\theta_2^{\mathrm{dist}}$ (rows). Blue = northern hemisphere, red = southern.
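
As a companion to Figure 3, the following sketches the entanglement-entropy measurement itself: the von Neumann entropy of qubit 0's reduced density matrix for a two-qubit state prepared with and without a CNOT. The state-preparation gates and angles are illustrative assumptions, not the trained model's circuit.

```python
# Minimal sketch of the Figure 3 quantity (assumed circuit; not the paper's
# code): S(rho_0) with and without a CNOT, for two context-dependent angles.
import numpy as np

def partial_trace_q1(rho):
    # Trace out qubit 1 of a 4x4 density matrix (qubit 0 = left tensor factor).
    return np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

def entropy(rho):
    # Von Neumann entropy in bits: -sum_i lambda_i log2 lambda_i.
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

CNOT = np.array([[1, 0, 0, 0],       # control = qubit 0, target = qubit 1
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
I2 = np.eye(2)
ket00 = np.array([1, 0, 0, 0], dtype=complex)

for a, label in ((0.9, "context A"), (2.1, "context B")):  # illustrative angles
    psi = CNOT @ np.kron(ry(a), I2) @ ket00   # rotate qubit 0, then entangle
    rho0 = partial_trace_q1(np.outer(psi, psi.conj()))
    print(label, "S(rho_0) =", round(entropy(rho0), 3))

# Ablating the CNOT leaves a product state, so the entropy collapses to zero:
psi = np.kron(ry(0.9), I2) @ ket00
rho0 = partial_trace_q1(np.outer(psi, psi.conj()))
print("CNOT ablated: S(rho_0) =", round(entropy(rho0), 3))
```

With the CNOT in place the two contexts settle at distinct nonzero entropies; removing it yields a product state with $S(\rho_0) = 0$, mirroring the collapse described in the caption.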

Theorems & Definitions (2)

  • Theorem 1: Shared-Parameter Tradeoff
  • Corollary 2: Dequantization (see the SO(3) sketch below)
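
Corollary 2's dequantization claim can be illustrated directly: a single-qubit circuit of $R_z$/$R_y$ gates acts on the Bloch vector as a product of $3 \times 3$ rotation matrices, so the model runs exactly on classical hardware. A minimal sketch, reusing the same hypothetical parameters and encodings as above:

```python
# Sketch of Corollary 2 (dequantization); parameters and token encodings are
# the same hypothetical values used above, not the paper's trained weights.
import numpy as np

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def so3_step(bloch, x, params=(0.3, 0.8, 0.5)):
    # Classical counterpart of U(theta, x): the unitary Rz Ry Rz acts on the
    # Bloch vector as the corresponding product of SO(3) rotations.
    t1, t2, t3 = params
    return rot_z(t3) @ rot_y(t2 + x) @ rot_z(t1) @ bloch

bloch = np.array([0.0, 0.0, 1.0])          # |0> has Bloch vector (0, 0, 1)
for x in [np.pi / 2] + [0.0] * 10:         # one context token, 10 distractors
    bloch = so3_step(bloch, x)
print("classically simulated Z-coordinate:", round(float(bloch[2]), 3))
```

Because a single-qubit pure state is fully described by its Bloch vector, this 3-vector simulation is exact, which is why the classical SO(3) baseline in Figure 1 produces identical trajectories.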