Extracting and Steering Emotion Representations in Small Language Models: A Methodological Comparison

Jihoon Jeong

Abstract

Small language models (SLMs) in the 100M-10B parameter range increasingly power production systems, yet whether they possess the internal emotion representations recently discovered in frontier models remains unknown. We present the first comparative analysis of emotion vector extraction methods for SLMs, evaluating 9 models across 5 architectural families (GPT-2, Gemma, Qwen, Llama, Mistral) using 20 emotions and two extraction methods (generation-based and comprehension-based). Generation-based extraction produces statistically superior emotion separation (Mann-Whitney p = 0.007; Cohen's d = -107.5), with the advantage modulated by instruction tuning and architecture. Emotion representations localize at middle transformer layers (~50% depth), following a U-shaped curve that is architecture-invariant from 124M to 3B parameters. We validate these findings against representational anisotropy baselines across 4 models and confirm causal behavioral effects through steering experiments, independently verified by an external emotion classifier (92% success rate, 37/40 scenarios). Steering reveals three regimes -- surgical (coherent text transformation), repetitive collapse, and explosive (text degradation) -- quantified by perplexity ratios and separated by model architecture rather than scale. We document cross-lingual emotion entanglement in Qwen, where steering activates semantically aligned Chinese tokens that RLHF does not suppress, raising safety concerns for multilingual deployment. This work provides methodological guidelines for emotion research on open-weight models and contributes to the Model Medicine series by bridging external behavioral profiling with internal representational analysis.
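For readers unfamiliar with activation steering, a minimal sketch of the underlying idea: an emotion vector can be taken as the mean difference between hidden states elicited by emotion-laden versus neutral prompts, then added back into the residual stream at a chosen strength. All names, shapes, and values here are illustrative assumptions, not the paper's actual extraction pipeline.

```python
import numpy as np

def emotion_vector(emotion_acts: np.ndarray, neutral_acts: np.ndarray) -> np.ndarray:
    """Difference-of-means emotion direction at one layer.

    emotion_acts, neutral_acts: (n_prompts, d_model) hidden states
    collected from emotion-laden and neutral prompts respectively.
    """
    return emotion_acts.mean(axis=0) - neutral_acts.mean(axis=0)

def steer(hidden: np.ndarray, vector: np.ndarray, strength: float) -> np.ndarray:
    """Add the scaled emotion direction to a hidden state (steering step)."""
    return hidden + strength * vector

# Toy demonstration with synthetic activations (d_model = 8).
rng = np.random.default_rng(0)
emo = rng.normal(loc=1.0, scale=0.1, size=(16, 8))   # "emotional" states
neu = rng.normal(loc=0.0, scale=0.1, size=(16, 8))   # "neutral" states

v = emotion_vector(emo, neu)
h_steered = steer(np.zeros(8), v, strength=0.02)     # strength as in Fig. 3
```

In practice the direction would be extracted from, and injected into, a specific transformer layer (the paper finds ~50% depth most effective), rather than applied to synthetic vectors as above.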


Paper Structure

This paper contains 31 sections, 4 figures, and 5 tables.

Figures (4)

  • Figure 1: Emotion vector separation by layer depth (SmolLM2-1.7B-Instruct). The U-shaped curve shows emotion representations concentrate at intermediate depths (~50%), with early and late layers dominated by token-level and next-token-prediction features respectively. The dashed red line indicates the anisotropy baseline (0.808).
  • Figure 2: Extraction method × model type interaction. Generation (left) shows wide variance across models; comprehension (right) converges to a narrow band (0.59-0.67). All lines slope downward, confirming the universal generation advantage.
  • Figure 3: Dose-response curve for GPT-2 (Aggressive→Calm). Left: internal activation deltas. Right: external classifier probabilities. Both measurements converge on the same behavioral flip point at strength 0.02 (orange dashed line).
  • Figure 4: Three steering regimes visualized by mean activation delta vs. perplexity ratio. Surgical models (green) maintain coherent output; repetitive collapse models (orange) produce predictable but degraded text; explosive models (red) show high perplexity and text incoherence.
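Figure 4's regime taxonomy is quantified by the perplexity ratio (steered over unsteered perplexity). A hedged sketch of how such a classifier might look; the threshold values are illustrative assumptions, not the cutoffs used in the paper:

```python
def classify_regime(ppl_steered: float, ppl_base: float,
                    surgical_max: float = 1.5,
                    collapse_max: float = 5.0) -> str:
    """Bucket a steered generation into one of the three regimes
    by its perplexity ratio. Thresholds are placeholder assumptions.
    """
    ratio = ppl_steered / ppl_base
    if ratio <= surgical_max:
        return "surgical"            # coherent text transformation
    if ratio <= collapse_max:
        return "repetitive collapse" # predictable but degraded text
    return "explosive"               # high perplexity, incoherent text
```

For example, `classify_regime(12.0, 10.0)` would land in the surgical band, while `classify_regime(120.0, 10.0)` would be flagged explosive.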