An Empirical Study of Many-Shot In-Context Learning for Machine Translation of Low-Resource Languages

Yinhan Lu, Gaganpreet Jhajj, Chen Zhang, Anietie Andy, David Ifeoluwa Adelani

Abstract

In-context learning (ICL) allows large language models (LLMs) to adapt to new tasks from a few examples, making it promising for languages underrepresented in pre-training. Recent work on many-shot ICL suggests that modern LLMs can further benefit from larger numbers of ICL examples, enabled by their long context windows. However, such gains depend on careful example selection, and the inference cost can be prohibitive for low-resource language communities. In this paper, we present an empirical study of many-shot ICL for machine translation from English into ten truly low-resource languages recently added to FLORES+. We analyze the effects of retrieving more informative examples, using out-of-domain data, and ordering examples by length. Our findings show that many-shot ICL becomes more effective as the number of examples increases. More importantly, we show that BM25-based retrieval substantially improves data efficiency: 50 retrieved examples roughly match 250 many-shot examples, while 250 retrieved examples perform similarly to 1,000 many-shot examples.
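The BM25-based example retrieval discussed above can be sketched in a few lines. This is a minimal, self-contained illustration (not the paper's implementation): it implements standard Okapi BM25 scoring over the English source side of a hypothetical example pool, then returns pool indices ranked by relevance to a test sentence, from which the top-k pairs would be placed in the prompt.

```python
import math
from collections import Counter

def bm25_rank(query, corpus, k1=1.5, b=0.75):
    """Rank corpus sentences by Okapi BM25 relevance to the query.

    query:  tokenized test source sentence (list of words)
    corpus: list of tokenized candidate source sentences
    Returns corpus indices sorted from most to least relevant.
    """
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    # Document frequency of each term across the pool.
    df = Counter()
    for doc in corpus:
        df.update(set(doc))

    def idf(t):
        return math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)

    def score(doc):
        tf = Counter(doc)
        norm = k1 * (1 - b + b * len(doc) / avgdl)
        return sum(idf(t) * tf[t] * (k1 + 1) / (tf[t] + norm)
                   for t in query if tf[t] > 0)

    return sorted(range(N), key=lambda i: score(corpus[i]), reverse=True)

# Toy pool of (English-side) candidate examples; in practice these would be
# the source sentences of the parallel training pairs.
pool = [
    "the cat sat on the mat".split(),
    "rainfall was heavy this season".split(),
    "a cat chased the dog".split(),
]
query = "the cat is on the mat".split()
top = bm25_rank(query, pool)
print(top[0])  # index of the most lexically similar example
```

In a real pipeline one would retrieve the top-k pairs per test sentence (e.g. k = 50 or 250, matching the settings compared in the abstract) rather than a fixed many-shot sample.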

Paper Structure

This paper contains 24 sections, 3 figures, and 14 tables.

Figures (3)

  • Figure 1: Per-language scaling curves (chrF++, random selection). Top two rows: eng→X; bottom two rows: X→eng.
  • Figure 2: Bible vs. in-domain examples (chrF++, eng→X, Gemini 2.5 Flash). Bible examples plateau or degrade with more shots, while in-domain examples scale consistently.
  • Figure 3: Effect of example ordering on translation quality (chrF++, Gemini 2.5 Flash, averaged across Emakhuwa, Tamazight, Ladin, Mauritian Cr., and Sudanese Ar.). "Short to Long" and "Long to Short" sort examples by source length.
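The length-based orderings compared in Figure 3 amount to a simple sort of the prompt examples by source-side length. A minimal sketch, with hypothetical (source, target) pairs standing in for real parallel data:

```python
# Hypothetical ICL example pairs; targets are placeholders.
examples = [
    ("a longer english source sentence here", "..."),
    ("short one", "..."),
    ("mid length sentence", "..."),
]

# "Short to Long": sort by source length in words; reverse for "Long to Short".
short_to_long = sorted(examples, key=lambda pair: len(pair[0].split()))
long_to_short = short_to_long[::-1]

print([src for src, _ in short_to_long])
```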