
Positional Cognitive Specialization: Where Do LLMs Learn To Comprehend and Speak Your Language?

Luis Frentzen Salim, Lun-Wei Ku, Hsing-Kuo Kenneth Pao

Abstract

Adapting large language models (LLMs) to new languages is an expensive and opaque process. Understanding how language models acquire new languages and multilingual abilities is key to achieving efficient adaptation. Prior multilingual interpretability research focuses primarily on how trained models process multilingual instructions, leaving unexplored the mechanisms through which models acquire new languages during training. We investigate these training dynamics in decoder-only transformers through the lens of two functional cognitive specializations: language perception (input comprehension) and production (output generation). Through experiments on low-resource languages, we demonstrate how perceptual and productive specialization emerge in different regions of a language model by running layer ablation sweeps from the model's input and output directions. Based on the observed specialization patterns, we propose CogSym, a layer-wise heuristic that enables effective adaptation by fine-tuning only a few early and late layers. We show that tuning only the outermost 25% of layers achieves downstream task performance within 2-3% of the full fine-tuning baseline, and that CogSym performs consistently when combined with adapter methods such as LoRA, demonstrating generalization beyond full fine-tuning. These findings deepen our understanding of how LLMs learn new languages and push toward more accessible and inclusive language modeling.
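To make the adaptation recipe concrete, the following is a minimal sketch of outermost-layer fine-tuning in the spirit of CogSym, assuming a Hugging Face decoder-only checkpoint whose blocks are exposed at model.model.layers (as in Llama-style models). This is not the authors' code: the checkpoint name is a placeholder, and splitting the 25% budget as 12.5% per end is our assumption.

```python
from transformers import AutoModelForCausalLM

# Placeholder checkpoint; any Llama-style decoder-only model whose
# blocks live at model.model.layers fits this sketch.
model = AutoModelForCausalLM.from_pretrained("your-base-model")

layers = model.model.layers            # the stack of decoder blocks
n = len(layers)                        # e.g. 32
k = max(1, round(0.125 * n))           # assumed 12.5% per end -> 25% total

for p in model.parameters():           # freeze the entire model first
    p.requires_grad = False

# Unfreeze only the k earliest and k latest decoder blocks.
for block in list(layers[:k]) + list(layers[-k:]):
    for p in block.parameters():
        p.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable:,} / {total:,} parameters")
```

Only the unfrozen parameters would then be handed to the optimizer; the middle layers stay fixed throughout adaptation.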

Paper Structure

This paper contains 15 sections and 9 figures.

Figures (9)

  • Figure 1: Word-level translation task performance for each ablation sweep, namely the front sweep (begs), rear sweep (ends), and both-ends or Cognitive-Symmetric (CogSym) sweep; index selection for each sweep is sketched after this list. Plots visualize the patterns that emerge as each region is expanded.
  • Figure 2: Downstream task performance of each ablation sweep
  • Figure 3: Comparison between the sweep strategies with respect to a single region budget $k$
  • Figure 4: 3D view of the word-translation task plot, with training steps as an additional axis, for the German--Javanese pair
  • Figure 5: Word translation task performance of 4-position variant with $k=8$
  • ...and 4 more figures
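The sweep vocabulary in the captions above (front, rear, both-ends, region budget $k$) can be made concrete with a small index-selection helper. This is a hypothetical sketch, not the paper's code: the function name, the 0-based indexing, and the assumption that the both-ends sweep takes $k$ layers from each end are all ours.

```python
def sweep_indices(strategy: str, k: int, n_layers: int) -> list[int]:
    """Return the layer indices selected for a region budget of k layers."""
    if strategy == "front":                    # begs: expand from the input side
        return list(range(min(k, n_layers)))
    if strategy == "rear":                     # ends: expand from the output side
        return list(range(max(0, n_layers - k), n_layers))
    if strategy == "both-ends":                # CogSym: k layers from each end
        front = list(range(min(k, n_layers)))
        rear = list(range(max(0, n_layers - k), n_layers))
        return sorted(set(front + rear))
    raise ValueError(f"unknown strategy: {strategy}")

# Example: a 32-layer model with budget k=4
print(sweep_indices("both-ends", 4, 32))  # [0, 1, 2, 3, 28, 29, 30, 31]
```

Whether the CogSym budget counts $k$ layers per end or $k$ in total is not recoverable from the captions alone; the sketch assumes per end.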