A Three-Pronged Approach to Cross-Lingual Adaptation with Multilingual LLMs
Vaibhav Singh, Amrith Krishna, Karthika NJ, Ganesh Ramakrishnan
TL;DR
The paper investigates three cross-lingual adaptation strategies (Handholding, Masquerading, and Bridging) for adapting an English-centric Llama-2-7b-chat model to Bengali, Hindi, and Tamil under low-resource constraints. By framing slot filling and NER as text-to-text tasks and evaluating under in-context learning (ICL) and parameter-efficient fine-tuning (PEFT), it demonstrates that Handholding (using English supervision) and Bridging (continual pre-training on Hindi) yield the strongest improvements, while Masquerading offers limited benefit, particularly under PEFT. The combination of Handholding and Bridging achieves the best overall performance, highlighting the value of leveraging a predominant language and a related language to enrich multilingual representations. These findings have practical implications for deploying LLMs in underrepresented languages, suggesting targeted pre-training and cross-lingual prompting as effective strategies when resources are scarce.
Abstract
Low-resource languages, by their very definition, tend to be underrepresented in the pre-training corpora of large language models. In this work, we investigate three low-resource cross-lingual approaches that enable an LLM to adapt to tasks in previously unseen languages. Llama-2 is an LLM in which Indic languages, among many other language families, contribute less than $0.005\%$ of the total $2$-trillion-token pre-training corpus. We experiment with the English-dominated Llama-2 for cross-lingual transfer to three Indic target languages: Bengali, Hindi, and Tamil. We study three approaches to cross-lingual transfer, under in-context learning (ICL) and fine-tuning. First, we find that adding supervisory signals via a dominant language of the LLM leads to improvements under both ICL and fine-tuning. Second, adapting the target languages through word reordering may be beneficial under ICL, but its impact diminishes with fine-tuning. Finally, continued pre-training in one low-resource language can improve model performance for other related low-resource languages.
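
To make the Handholding idea more concrete, the minimal sketch below assembles an ICL prompt that pairs each target-language example with an English rendering as an added supervisory signal, with NER cast as a text-to-text task. The prompt template, the example sentences, and the `build_handholding_prompt` helper are illustrative assumptions for this page, not the exact format used in the paper.

```python
# Hypothetical "Handholding"-style ICL prompt construction: each target-language
# sentence is accompanied by its English translation as extra supervision, and
# NER is framed as text-to-text generation. All names and data here are
# illustrative assumptions, not artifacts from the paper.

def build_handholding_prompt(demos, query):
    """Assemble a few-shot prompt from (target_text, english_text, entities) demos."""
    parts = ["Extract the named entities from the sentence as 'entity: type' pairs."]
    for target_text, english_text, entities in demos:
        parts.append(
            f"Sentence (Hindi): {target_text}\n"
            f"Sentence (English): {english_text}\n"   # added English supervision
            f"Entities: {entities}"
        )
    target_text, english_text = query
    parts.append(
        f"Sentence (Hindi): {target_text}\n"
        f"Sentence (English): {english_text}\n"
        f"Entities:"
    )
    return "\n\n".join(parts)


demos = [
    ("मोहन दिल्ली में रहता है।", "Mohan lives in Delhi.", "मोहन: PER, दिल्ली: LOC"),
]
query = ("सीता मुंबई गई।", "Sita went to Mumbai.")

# The resulting string would then be passed to Llama-2-7b-chat for generation.
print(build_handholding_prompt(demos, query))
```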
