One Model to Translate Them All? A Journey to Mount Doom for Multilingual Model Merging

Baban Gain, Asif Ekbal, Trilok Nath Singh

Abstract

Weight-space model merging combines independently fine-tuned models without accessing original training data, offering a practical alternative to joint training. While merging succeeds in multitask settings, its behavior in multilingual contexts remains poorly understood. We systematically study weight-space merging for multilingual machine translation by fully fine-tuning a language model on large-scale bilingual corpora and evaluating standard merging strategies. Our experiments reveal that merging degrades performance, especially when target languages differ. To explain this failure, we analyze internal representations using span-conditioned neuron selectivity and layer-wise centered kernel alignment. We find that language-specific neurons concentrate in embedding layers and upper transformer blocks, while intermediate layers remain largely shared across languages. Critically, fine-tuning redistributes rather than sharpens language selectivity: neurons for supervised and related languages become less exclusive, while those for unsupervised languages grow more isolated. This redistribution increases representational divergence in higher layers that govern generation. These findings suggest that multilingual fine-tuning may reshape geometry in ways that reduce compatibility with standard weight-space merging assumptions. Our work thus provides an explanation for why merging fails in multilingual translation scenarios.
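For readers unfamiliar with the two techniques named above, the following is a minimal sketch of uniform weight-space averaging (one standard merging strategy) and linear centered kernel alignment (CKA). It is an illustrative example assuming PyTorch state dicts and activation matrices, not the authors' implementation; function and variable names are our own.

```python
# Illustrative sketch only: uniform weight averaging of fine-tuned checkpoints
# and linear CKA between two layers' representations. Assumes PyTorch.
import torch


def average_merge(state_dicts, weights=None):
    """Merge checkpoints by (weighted) averaging of matching parameters."""
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(w * sd[name].float() for w, sd in zip(weights, state_dicts))
    return merged


def linear_cka(X, Y):
    """Linear CKA between representation matrices of shape (n_samples, dim)."""
    X = X - X.mean(dim=0, keepdim=True)   # center each feature
    Y = Y - Y.mean(dim=0, keepdim=True)
    hsic = (Y.T @ X).norm() ** 2          # ||Y^T X||_F^2
    norm_x = (X.T @ X).norm()             # ||X^T X||_F
    norm_y = (Y.T @ Y).norm()             # ||Y^T Y||_F
    return (hsic / (norm_x * norm_y)).item()
```

In this sketch, two checkpoints fine-tuned on different language pairs would be merged with `average_merge([sd_a, sd_b])`, and `linear_cka` applied layer by layer to the two models' activations on a shared probe set would quantify the representational divergence the abstract refers to.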

Paper Structure

This paper contains 29 sections, 9 equations, 6 figures, and 7 tables.

Figures (6)

  • Figure 1: Model merging requires only one GPU during deployment, whereas individually fine-tuned models need one GPU per language pair
  • Figure 2: Neuron-level language selectivity averages for both translation directions. Top: English to XX. Bottom: XX to English.
  • Figure 3: Angle between the representations across models fine-tuned on En$\rightarrow$Indic.
  • Figure 4: Layer-wise neuron counts under target masking for Indic$\rightarrow$En.
  • Figure 5: Layer-wise neuron counts under source masking for En$\rightarrow$Indic.
  • ...and 1 more figure