Imperative Interference: Social Register Shapes Instruction Topology in Large Language Models

Tony Mason

Abstract

System prompt instructions that cooperate in English compete in Spanish: the semantic content is identical, but the interaction topology is inverted. We present instruction-level ablation experiments across four languages and four models showing that this topology inversion is mediated by social register: the imperative mood carries different obligatory force across speech communities, and models trained on multilingual data have learned these conventions. Declarative rewriting of a single instruction block reduces cross-linguistic variance by 81% (p = 0.029, permutation test). Rewriting three of eleven imperative blocks shifts Spanish instruction topology from competitive to cooperative, with spillover effects on unrewritten blocks. These findings suggest that models process instructions as social acts, not technical specifications: "NEVER do X" is an exercise of authority whose force is language-dependent, while "X: disabled" is a factual description that transfers across languages. If register mediates instruction-following at inference time, it plausibly does so during training. We state this as a testable prediction: constitutional AI principles authored in imperative mood may create language-dependent alignment. Corpus: 22 hand-authored probes against a production system prompt decomposed into 56 blocks.
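The permutation test mentioned above can be sketched as follows. Under the null hypothesis, each language's paired adherence scores are exchangeable between the imperative and declarative encodings, so we randomly swap pairs and ask how often the variance reduction is at least as large as observed. The scores, language set, and function names below are illustrative placeholders, not the paper's data or code.

```python
import random

def variance(xs):
    """Population variance of a list of scores."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def permutation_test(imperative, declarative, n_perm=10000, seed=0):
    """One-sided paired permutation test: does the declarative encoding
    reduce cross-linguistic variance? Scores are paired per language,
    so the null is simulated by independently swapping each pair."""
    rng = random.Random(seed)
    observed = variance(imperative) - variance(declarative)
    count = 0
    for _ in range(n_perm):
        a, b = [], []
        for x, y in zip(imperative, declarative):
            if rng.random() < 0.5:  # swap this language's pair
                x, y = y, x
            a.append(x)
            b.append(y)
        if variance(a) - variance(b) >= observed:
            count += 1
    # Add-one smoothing counts the observed labeling itself
    return (count + 1) / (n_perm + 1)

# Hypothetical per-language adherence scores (EN, ES, FR, ZH)
imperative = [0.91, 0.52, 0.70, 0.61]
declarative = [0.82, 0.78, 0.80, 0.76]
p = permutation_test(imperative, declarative)
```

With only four languages there are 16 possible swap patterns, so an exact enumeration would also be feasible; the Monte Carlo form above generalizes to larger language sets.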

Paper Structure

This paper contains 28 sections, 4 figures, and 11 tables.

Figures (4)

  • Figure 1: Cross-linguistic baseline adherence. Three models perform best in English; Mistral performs worst in English and best in Mandarin; a French-trained model peaks in neither its training language nor English.
  • Figure 2: Instruction topology comparison (Haiku, Phase 0 main effects). English effects are uniformly negative (cooperative); Spanish effects are predominantly positive (competitive). The same instructions that strengthen the English prompt weaken the Spanish one.
  • Figure 3: Cross-linguistic variance of commit-restrictions by encoding variant and model. Declarative rewriting eliminates Haiku's cross-linguistic variance almost entirely. Gemini is unaffected; its failure is model-level, not encoding-dependent.
  • Figure 4: E-TOPO topology shift. Target probes (top) shift from competitive to cooperative as expected. Spillover probes (middle) also shift despite being unrewritten. Control probes (bottom) remain stable. The spillover demonstrates that register operates at the system level, not per-block.