Task Oriented In-Domain Data Augmentation

Xiao Liang, Xinyu Hu, Simiao Zuo, Yeyun Gong, Qiang Lou, Yi Liu, Shao-Lun Huang, Jian Jiao

TL;DR

TRAIT, a task-oriented in-domain data augmentation framework, is proposed and adapted to two domains, advertisement and math, improving LLM performance by 8% in the advertisement domain and 7.5% in the math domain.

Abstract

Large Language Models (LLMs) have shown superior performance in various applications and fields. To achieve better performance on specialized domains such as law and advertisement, LLMs are often continually pre-trained on in-domain data. However, existing approaches suffer from two major issues. First, in-domain data are scarce compared with general domain-agnostic data. Second, data used for continual pre-training are not task-aware, so they may not be helpful to downstream applications. We propose TRAIT, a task-oriented in-domain data augmentation framework. Our framework is divided into two parts: in-domain data selection and task-oriented synthetic passage generation. The data selection strategy identifies and selects a large amount of in-domain data from general corpora, and thus significantly enriches domain knowledge in the continual pre-training data. The synthetic passages contain guidance on how to use domain knowledge to answer questions about downstream tasks. By training on such passages, the model aligns with the needs of downstream applications. We adapt LLMs to two domains: advertisement and math. On average, TRAIT improves LLM performance by 8% in the advertisement domain and 7.5% in the math domain.
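To make the first stage of the pipeline concrete, below is a minimal sketch of score-based in-domain data selection: rank general-corpus documents by their similarity to a small pool of in-domain seed documents and keep the top-scoring ones. This is an illustration under our own assumptions (TF-IDF cosine similarity; the function select_in_domain and its top_k parameter are hypothetical names), not the paper's exact selector:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity
    import numpy as np

    def select_in_domain(general_corpus, seed_docs, top_k=1000):
        """Return the top_k general-corpus documents most similar to seed_docs."""
        # Fit one shared vocabulary over seed and candidate documents.
        vectorizer = TfidfVectorizer(max_features=50_000, stop_words="english")
        matrix = vectorizer.fit_transform(list(seed_docs) + list(general_corpus))
        seed_vecs = matrix[: len(seed_docs)]
        cand_vecs = matrix[len(seed_docs):]
        # Score each candidate by its highest similarity to any seed document.
        scores = cosine_similarity(cand_vecs, seed_vecs).max(axis=1)
        order = np.argsort(scores)[::-1][:top_k]
        return [general_corpus[i] for i in order]

Any scoring function (a trained domain classifier, embedding similarity, perplexity under a domain model) can be substituted for the TF-IDF score without changing the overall select-then-train recipe.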

Paper Structure

This paper contains 24 sections, 2 equations, 18 figures, and 5 tables.

Figures (18)

  • Figure 1: An example of a task-oriented synthetic passage on the ads domain. Left: two downstream tasks (Query Rewriting and Query-LandingPage Relevance) and inputs. Right: the structure of the generated passage, including two problem-specific paragraphs and an enlightenment paragraph.
  • Figure 2: An example of a task-oriented synthetic passage on the math domain. Left: the selected two tasks (GSM8k and SAT) with an example problem from each task. Right: the structure of the generated passage, including two problem-specific paragraphs and an enlightenment paragraph.
  • Figure 3: Visualization of samples from the general corpus, the original in-domain ads corpus, ads downstream tasks, and TRAIT (including both selected in-domain data and synthetic passages). We use spaCy (Honnibal and Montani, 2017) (left) and Mistral-7B (Jiang et al., 2023) (right) for embedding, and t-SNE (van der Maaten and Hinton, 2008) for visualization.
  • Figure 4: Left: The average winning rate of 4 ads generation tasks (AG, DG, TG and TR) during continual pre-training. Right: The average few-shot accuracy of all math tasks during continual pre-training.
  • Figure 5: An example of model degradation: when continually pre-trained on the original in-domain corpus, the model generates repetitive and nonsensical text; this issue is absent in the base model and in the model trained on the TRAIT corpus.
  • ...and 13 more figures
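The passage structure described in Figures 1 and 2 (one problem-specific paragraph per downstream task, followed by an enlightenment paragraph) can be sketched as a prompt builder for the generating LLM. The template below is hypothetical; build_passage_prompt, its wording, and the placeholder problems are our assumptions, not the paper's actual prompt:

    def build_passage_prompt(domain, task_examples):
        """task_examples: list of (task_name, example_problem) pairs."""
        lines = [
            f"Write a {domain}-domain teaching passage with "
            f"{len(task_examples) + 1} paragraphs."
        ]
        # One problem-specific paragraph per downstream task.
        for i, (task, problem) in enumerate(task_examples, start=1):
            lines.append(
                f"Paragraph {i}: explain, step by step, how domain knowledge "
                f"solves this {task} problem: {problem}"
            )
        # Closing "enlightenment" paragraph tying the solutions together.
        lines.append(
            "Final paragraph: compare the solutions above and state the "
            "shared domain principles they rely on."
        )
        return "\n".join(lines)

    # Example mirroring Figure 2 (math domain), with placeholder problems:
    prompt = build_passage_prompt(
        "math",
        [("GSM8k", "<example GSM8k word problem>"),
         ("SAT", "<example SAT math question>")],
    )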