LangFIR: Discovering Sparse Language-Specific Features from Monolingual Data for Language Steering

Sing Hieng Wong, Hassan Sajjad, A. B. Siddique

Abstract

Large language models (LLMs) show strong multilingual capabilities, yet reliably controlling the language of their outputs remains difficult. Representation-level steering addresses this by adding language-specific vectors to model activations at inference time, but identifying language-specific directions in the residual stream often relies on multilingual or parallel data that can be expensive to obtain. Sparse autoencoders (SAEs) decompose residual activations into interpretable, sparse feature directions and offer a natural basis for this search, yet existing SAE-based approaches face the same data constraint. We introduce LangFIR (Language Feature Identification via Random-token Filtering), a method that discovers language-specific SAE features using only a small amount of monolingual data and random-token sequences. Many SAE features consistently activated by target-language inputs do not encode language identity. Random-token sequences surface these language-agnostic features, allowing LangFIR to filter them out and isolate a sparse set of language-specific features. We show that these features are extremely sparse, highly selective for their target language, and causally important: directional ablation increases cross-entropy loss only for the corresponding language. Using these features to construct steering vectors for multilingual generation control, LangFIR achieves the best average accuracy and BLEU across three models (Gemma 3 1B, Gemma 3 4B, and Llama 3.1 8B), three datasets, and twelve target languages, outperforming the strongest monolingual baseline and surpassing methods that rely on parallel data. Our results suggest that language identity in multilingual LLMs is localized in a sparse set of feature directions discoverable with monolingual data. Code is available at https://anonymous.4open.science/r/LangFIR-C0F5/.
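
The feature-identification step described in the abstract can be summarized compactly. The sketch below is a minimal illustration under stated assumptions, not the released implementation: it assumes a Hugging Face Llama/Gemma-style model (so the residual stream is reachable via `model.model.layers[l]`) and an SAE object exposing an `encode` method and a `num_features` attribute, both hypothetical names here.

```python
import torch

def get_residual_acts(model, token_ids, layer):
    """Capture the residual stream at the output of one decoder layer."""
    cache = {}
    def hook(_module, _inputs, output):
        cache["resid"] = output[0] if isinstance(output, tuple) else output
    handle = model.model.layers[layer].register_forward_hook(hook)
    with torch.no_grad():
        model(token_ids)
    handle.remove()
    return cache["resid"].squeeze(0)  # (seq_len, d_model)

def language_specific_features(sentences, tokenizer, model, sae,
                               layer, tau=0.9, seq_len=64):
    """Return indices of SAE features specific to the target language."""
    def activation_frequency(batches):
        # Fraction of samples on which each SAE feature fires at least once.
        counts = torch.zeros(sae.num_features)  # `num_features`: assumed attribute
        for token_ids in batches:
            resid = get_residual_acts(model, token_ids, layer)
            feats = sae.encode(resid)            # (seq_len, num_features); assumed API
            counts += (feats > 0).any(dim=0).float()
        return counts / len(batches)

    target = [tokenizer(s, return_tensors="pt").input_ids for s in sentences]
    # One random-token sequence per sentence, sampled from the vocabulary.
    random = [torch.randint(0, tokenizer.vocab_size, (1, seq_len))
              for _ in sentences]

    consistent = activation_frequency(target) >= tau  # fire on target-language text
    agnostic = activation_frequency(random) >= tau    # fire even on random tokens
    # Language-specific = language-consistent minus language-agnostic.
    return torch.nonzero(consistent & ~agnostic).squeeze(-1)
```

The key design choice is the set difference in the final line: features that also fire on random-token sequences are treated as language-agnostic and removed, matching step (4) of Figure 1.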

Paper Structure

This paper contains 34 sections, 4 equations, 18 figures, and 12 tables.

Figures (18)

  • Figure 1: Overview of LangFIR. (1) For each target-language sentence, a random-token sequence is generated by sampling from the tokenizer's vocabulary. (2) Residual-stream activations are extracted at layer $l$ and encoded through the SAE. (3) Sample-wise filtering with an activation-frequency threshold $\tau$ identifies features that activate frequently for target-language sentences (language-consistent) and for random-token sequences (language-agnostic). (4) Removing language-agnostic features from language-consistent features yields a sparse set of language-specific features.
  • Figure 2: Analyses of language-specific feature identification across sample sizes on Llama 3.1 8B. (a) Overlap between language-consistent and language-agnostic feature sets approaches 100%. (b) Fewer than 5 language-specific features remain per language. Both quantities stabilize by a sample size of ${\sim}$100.
  • Figure 3: Analyses of language-specific feature properties on Llama 3.1 8B. (a) Rank-1 features dominate in mean activation magnitude. (b) Language-specific features activate selectively on their target language and remain near zero on others.
  • Figure 4: Change in cross-entropy after ablating the top-2 language-specific features per layer on Llama 3.1 8B. Ablation increases loss selectively for the target language; English is a notable exception (see the ablation sketch after this list).
  • Figure 5: Ablation over the number of top-$k$ language-specific features, sample size, and feature-overlap removal on Llama 3.1 8B. (a) Even $k{=}1$ achieves strong performance. (b) Steering metrics are stable with as few as 10 sentences. (c) Disabling feature-overlap removal degrades all steering metrics dramatically.
  • ...and 13 more figures
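
Both interventions referenced above admit short implementations. The sketch below illustrates directional ablation (the causal test in Figure 4) and additive steering from the abstract, assuming `direction` is the SAE decoder vector for a language-specific feature; the hook wiring and the scale `alpha` are illustrative assumptions, not the paper's tuned values.

```python
import torch

def ablate_direction(resid, direction):
    """Remove the residual-stream component along a feature direction."""
    d = direction / direction.norm()
    return resid - (resid @ d).unsqueeze(-1) * d

def steer_with_direction(resid, direction, alpha=8.0):
    """Add the language-specific direction to push outputs toward that language."""
    return resid + alpha * (direction / direction.norm())

def make_hook(fn):
    """Wrap an intervention so it can run as a forward hook on a decoder layer."""
    def hook(_module, _inputs, output):
        if isinstance(output, tuple):
            return (fn(output[0]),) + output[1:]
        return fn(output)
    return hook

# Usage sketch: steer generation toward a target language at layer l.
# handle = model.model.layers[l].register_forward_hook(
#     make_hook(lambda h: steer_with_direction(h, direction, alpha=8.0)))
# model.generate(...)
# handle.remove()
```

Under this setup, ablating a language-specific direction should raise cross-entropy only on text in that language, mirroring the selectivity reported in Figure 4.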