Retrieval-style In-Context Learning for Few-shot Hierarchical Text Classification

Huiyao Chen, Yu Zhao, Zulong Chen, Mengjia Wang, Liangyue Li, Meishan Zhang, Min Zhang

TL;DR

This work tackles few-shot hierarchical text classification by marrying retrieval-augmented in-context learning with large language models. It introduces a label-aware, PLM-based indexer to build a retrieval database and employs an iterative, layer-wise inference strategy to predict multi-level HTC labels, guided by MLM, CLS, and a Divergent Contrastive Learning objective. Across WOS, DBpedia, and Patent, the retrieval-based approach demonstrates state-of-the-art performance, robustness to random seeds, and clear benefits from label descriptions, hard negative sampling, and prompt design. The framework also enables retrieval-assisted annotation and provides insights into the impact of hierarchy depth, LLM choice, and prompt structure on ICL efficiency and accuracy.
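The iterative, layer-wise inference strategy can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: `embed` stands in for the label-aware PLM indexer, `retrieve_demos` for nearest-neighbour lookup in the retrieval database, and `call_llm` for the LLM call (all three names are placeholders). The idea it shows is that each layer's retrieval and prompting is constrained by the labels already predicted at the previous layers.

```python
def embed(text):
    # Stand-in for the label-aware PLM indexer: hash text into a tiny vector.
    return [sum(ord(c) for c in text) % 97]

def retrieve_demos(database, query_vec, k=2):
    # Nearest-neighbour lookup over (vector, text, label_path) records.
    return sorted(database, key=lambda r: abs(r[0][0] - query_vec[0]))[:k]

def call_llm(prompt):
    # Stand-in LLM: echo the label of the first (closest) demonstration.
    for line in prompt.splitlines():
        if line.startswith("Label: "):
            return line.split(": ", 1)[1]
    return "unknown"

def predict_hierarchy(text, databases_per_layer):
    """Predict one label per hierarchy layer, restricting each retrieval
    to demonstrations consistent with the label path chosen so far."""
    query_vec, path = embed(text), []
    for layer, db in enumerate(databases_per_layer):
        # Keep only demonstrations whose label path matches the current prefix.
        candidates = [r for r in db if r[2][:layer] == tuple(path)] or db
        demos = retrieve_demos(candidates, query_vec)
        prompt = "\n".join(f"Text: {t}\nLabel: {l[layer]}" for _, t, l in demos)
        prompt += f"\nText: {text}\nLabel:"
        path.append(call_llm(prompt))
    return path
```

With a two-layer label hierarchy, the second retrieval round only considers demonstrations under the first-layer label that was already predicted, which is what keeps the prompt focused despite a large overall label set.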

Abstract

Hierarchical text classification (HTC) is an important task with broad applications, and few-shot HTC has gained increasing interest recently. While in-context learning (ICL) with large language models (LLMs) has achieved significant success in few-shot learning, it is less effective for HTC because of the expansive hierarchical label sets and extremely ambiguous labels. In this work, we introduce the first ICL-based framework with LLMs for few-shot HTC. We exploit a retrieval database to identify relevant demonstrations, and an iterative policy to manage multi-layer hierarchical labels. In particular, we equip the retrieval database with HTC label-aware representations of the input texts, achieved by continually training a pretrained language model with masked language modeling (MLM), layer-wise classification (CLS, specific to HTC), and a novel divergent contrastive learning (DCL, mainly for adjacent semantically similar labels) objective. Experimental results on three benchmark datasets demonstrate the superior performance of our method, which achieves state-of-the-art results in few-shot HTC.
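The exact forms of the three indexer-training objectives are given in the paper; as a rough illustration only, the DCL term can be thought of as an InfoNCE-style contrastive loss in which adjacent, semantically similar labels supply hard negatives, and the indexer is trained on a weighted sum of the three losses. The function names and the loss weights below are assumptions for the sketch, not taken from the paper.

```python
import math

def info_nce(sim_pos, sims_neg, temperature=0.1):
    """Generic InfoNCE-style contrastive loss over similarity scores.
    sim_pos: similarity to the positive (same-label) example;
    sims_neg: similarities to negatives, e.g. examples under adjacent,
    semantically similar labels acting as hard negatives."""
    logits = [sim_pos / temperature] + [s / temperature for s in sims_neg]
    m = max(logits)  # subtract the max for numerical stability
    denom = sum(math.exp(z - m) for z in logits)
    return -(logits[0] - m - math.log(denom))

def total_loss(l_mlm, l_cls, l_dcl, w=(1.0, 1.0, 1.0)):
    # Weighted sum of the three continual-training objectives
    # (equal weights here are an assumption).
    return w[0] * l_mlm + w[1] * l_cls + w[2] * l_dcl
```

When the positive similarity dominates the negatives, the contrastive term approaches zero; when a hard negative from an adjacent label is just as similar as the positive, the loss stays high, pushing the indexer to separate those neighbouring labels.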

Paper Structure

This paper contains 30 sections, 6 equations, 8 figures, 9 tables, and 1 algorithm.

Figures (8)

  • Figure 1: The problems of ICL-based few-shot HTC and our solutions. MLM, CLS, and DCL denote Masked Language Modeling, Layer-wise CLaSsification, and Divergent Contrastive Learning, the three objectives used for indexer training.
  • Figure 2: The architecture of retrieval-style in-context learning for HTC. The [P$_j$] term is a soft prompt template token to learn the $j$-th hierarchical layer label index representation.
  • Figure 3: Label description generation.
  • Figure 4: Results for different label text types in the 1-shot setting. Ori Label means the original leaf label text, Label Path means all text on the label path, and Label Desc means the label description text generated by the LLM.
  • Figure 5: Results of different contrastive learning strategies on the WOS dataset. The x-axis denotes the shot number Q and the y-axis denotes the F1 score.
  • ...and 3 more figures