
Can we teach language models to gloss endangered languages?

Michael Ginn, Mans Hulden, Alexis Palmer

TL;DR

This work explores whether LLMs can perform interlinear glossing through in-context learning, without any traditional training, and proposes new strategies for selecting in-context examples, finding that targeted selection significantly improves performance.

Abstract

Interlinear glossed text (IGT) is a popular format in language documentation projects, where each morpheme is labeled with a descriptive annotation. Automating the creation of interlinear glossed text would be desirable to reduce annotator effort and maintain consistency across annotated corpora. Prior research has explored a number of statistical and neural methods for automatically producing IGT. As large language models (LLMs) have shown promising results across multilingual tasks, even for rare, endangered languages, it is natural to wonder whether they can be utilized for the task of generating IGT. We explore whether LLMs can be effective at interlinear glossing with in-context learning, without any traditional training. We propose new approaches for selecting examples to provide in context, observing that targeted selection can significantly improve performance. We find that LLM-based methods outperform standard transformer baselines, despite requiring no training at all. These approaches still underperform state-of-the-art supervised systems for the task, but they are highly practical for researchers outside of the NLP community, requiring minimal effort to use.
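To make the in-context setup concrete, here is a minimal sketch of retrieving glossed examples and assembling a few-shot prompt. The word-overlap heuristic, the corpus record format, and all function names are illustrative assumptions, not the paper's exact selection method.

```python
# Minimal sketch of targeted in-context example selection for glossing.
# Assumes each corpus entry looks like:
#   {"transcription": "...", "gloss": "..."}
# The similarity heuristic below (word overlap) is an assumption;
# the paper evaluates several selection strategies.

def word_overlap(a: str, b: str) -> float:
    """Jaccard overlap between the word sets of two transcriptions."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def select_examples(target: str, corpus: list[dict], k: int = 8) -> list[dict]:
    """Pick the k glossed sentences most similar to the target sentence."""
    return sorted(
        corpus,
        key=lambda ex: word_overlap(target, ex["transcription"]),
        reverse=True,
    )[:k]

def build_prompt(target: str, examples: list[dict]) -> str:
    """Assemble a few-shot glossing prompt from the selected examples."""
    shots = "\n\n".join(
        f"Transcription: {ex['transcription']}\nGloss: {ex['gloss']}"
        for ex in examples
    )
    return (
        "Gloss each morpheme of the final transcription, "
        "following the style of the examples.\n\n"
        f"{shots}\n\nTranscription: {target}\nGloss:"
    )
```

In this setup the returned prompt string would be sent to a chat-style LLM, and the model's completion is read off as the predicted gloss line; no model weights are ever updated.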


Paper Structure

This paper contains 34 sections, 3 equations, 13 figures, and 3 tables.

Figures (13)

  • Figure 1: Accuracy of an LLM-based glossing method on Gitksan data, varying the number of provided examples and the strategy for selecting examples.
  • Figure 2: Gitksan
  • Figure 3: Lezgi
  • Figure 4: Natugu
  • Figure 5: Uspanteko
  • ...and 8 more figures