M2Lingual: Enhancing Multilingual, Multi-Turn Instruction Alignment in Large Language Models

Rishabh Maheshwary, Vikas Yadav, Hoang Nguyen, Khyati Mahajan, Sathwik Tejaswi Madhusudhan

TL;DR

M2Lingual presents the first fully synthetic, multi-turn multilingual instruction-finetuning dataset built with a two-step Evol taxonomy. Spanning 70 languages and 17+ NLP tasks, the dataset comprises roughly 182K IFT pairs and is designed to be scalable and cost-efficient, addressing limitations of human- or translation-based multilingual IFT approaches. Through seed selection from native-language sources, guided Evol enrichment, and multi-turn Evol expansion, the authors demonstrate consistent performance gains across multiple model families and sizes on MT-Bench, QA, MGSM, and other benchmarks, with notable improvements in low-resource languages. The work also includes thorough ablations, low-resource analyses, and content-moderation considerations, arguing for the viability and safety of large-scale synthetic multilingual instruction data in advancing multilingual NLP capabilities.

Abstract

Instruction finetuning (IFT) is critical for aligning Large Language Models (LLMs) to follow instructions. While many effective IFT datasets have been introduced recently, they predominantly focus on high-resource languages like English. To better align LLMs across a broad spectrum of languages and tasks, we propose a fully synthetic, novel taxonomy (Evol) guided Multilingual, Multi-turn instruction finetuning dataset, called M2Lingual. It is constructed by first selecting a diverse set of seed examples and then utilizing the proposed Evol taxonomy to convert these seeds into complex and challenging multi-turn instructions. We demonstrate the effectiveness of M2Lingual by training LLMs of varying sizes and showcasing the enhanced performance across a diverse set of languages. We contribute the two-step Evol taxonomy with the guided generation code: https://github.com/ServiceNow/M2Lingual, as well as the first fully synthetic, general and task-oriented, multi-turn, multilingual dataset built with Evol - M2Lingual: https://huggingface.co/datasets/ServiceNow-AI/M2Lingual - containing 182K total IFT pairs, covering 70 languages and 17+ NLP tasks.
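The three-step pipeline described above (seed selection, task-specific Evol, multi-turn Evol expansion) can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' released code: the function names, the `call_llm` placeholder, and the `EVOL_PROMPTS` entries are all assumptions standing in for GPT-4 calls and the actual Evol taxonomy.

```python
import random

# Placeholder standing in for a GPT-4-style completion call (assumption).
def call_llm(prompt: str) -> str:
    return f"<response to: {prompt[:40]}...>"

def select_seeds(pool, per_language=2, seed=0):
    """Step 1: sample a diverse seed set, stratified by language."""
    rng = random.Random(seed)
    by_lang = {}
    for ex in pool:
        by_lang.setdefault(ex["language"], []).append(ex)
    return [ex for exs in by_lang.values()
            for ex in rng.sample(exs, min(per_language, len(exs)))]

# Illustrative stand-ins for the task-specific Evol prompt taxonomy.
EVOL_PROMPTS = {
    "reasoning": "Rewrite the instruction to require multi-step reasoning:\n{instruction}",
    "constraints": "Add explicit output-format constraints to:\n{instruction}",
}

def evolve(seed_example, condition):
    """Step 2: turn a seed into a more complex, evolved instruction."""
    prompt = EVOL_PROMPTS[condition].format(instruction=seed_example["instruction"])
    return {**seed_example, "instruction": call_llm(prompt)}

def expand_multi_turn(example, n_turns=2):
    """Step 3: grow the evolved instruction into a multi-turn conversation."""
    turns = [(example["instruction"], call_llm(example["instruction"]))]
    for _ in range(n_turns - 1):
        follow_up = call_llm(f"Write a follow-up question for: {turns[-1][1]}")
        turns.append((follow_up, call_llm(follow_up)))
    return {**example, "conversation": turns}

# Toy seed pool; real seeds come from native-language sources such as Aya.
pool = [
    {"language": "hi", "task": "qa", "instruction": "भारत की राजधानी क्या है?"},
    {"language": "sw", "task": "summarization", "instruction": "Fupisha aya hii."},
]
dataset = [expand_multi_turn(evolve(s, "reasoning")) for s in select_seeds(pool)]
```

Under this sketch, each output record carries its language and task metadata plus a multi-turn conversation, matching the general shape of the IFT pairs the dataset describes.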

Paper Structure

This paper contains 41 sections, 5 figures, and 41 tables.

Figures (5)

  • Figure 1: Walk-through of M2Lingual data synthesis. Step 1 is seed selection. In Step 2, the task-specific Evol prompt taxonomy corresponding to each instruction is used to generate a complex evolved instruction. Finally, in Step 3, multi-turn instructions are generated from the Step 2 evolved instructions using the multi-turn Evol prompt taxonomy.
  • Figure 2: Taxonomy of Evol prompt conditions applied in creating M2Lingual. Part 1 includes Evol prompts for Aya seeds, and Part 2 has the multi-turn Evol prompts applied to create conversations.
  • Figure 3: Performance vs seed size in data synthesis
  • Figure 4: Comparison between Aya, WildChat and M2Lingual language distribution.
  • Figure 5: Multi-turn prompt to GPT-4