TailNLG: A Multilingual Benchmark Addressing Verbalization of Long-Tail Entities

Lia Draetta, Michael Oliverio, Virginia Ramón-Ferrer, Pier Felice Balestrucci, Flaviana Corallo, Carlos Badenes-Olmedo, Alessandro Mazzei, Marco Antonio Stranisci, Rossana Damiano

Abstract

The automatic verbalization of structured knowledge is a key task for making knowledge graphs accessible to non-expert users and supporting retrieval-augmented generation systems. Although recent advances in Data-to-Text generation have improved multilingual coverage, little attention has been paid to potential biases in the verbalization of rare entities, commonly known as long-tail entities. In this work, we present the first systematic study of long-tail entities in Data-to-Text generation. We introduce TailNLG, a new multilingual benchmark in English, Italian, and Spanish, built from Wikidata and covering entities with varying levels of popularity. We evaluate three different families of large language models in zero-shot settings and compare their performance on rare versus common entities, as well as against the established WebNLG benchmark. Our results reveal a consistent bias against long-tail entities: embedding-based scores are lower, and model uncertainty is higher for rare entities. We further show that the impact of long-tail entities varies across models and languages, and that existing evaluation metrics do not consistently capture these differences, highlighting the need for more reliable evaluation frameworks.

Paper Structure

This paper contains 32 sections, 3 figures, and 11 tables.

Figures (3)

  • Figure 1: Distribution of entities by claims in the Wikidata category Artist. A small number of entities are associated with many claims, while the majority (long-tail) are associated with few relations. The red dashed line indicates the Pareto cut-off, while the green dashed line marks the head–tail threshold.
  • Figure 2: PPL distributions for head and long-tail entities across models (log10 scale). Boxes represent the interquartile range (IQR, Q1–Q3) with the median shown as a line inside the box. Statistical significance was assessed using the Mann-Whitney U test: *** p < 0.001, ns = not significant.
  • Figure 3: Examples of chain, sibling, and mixed triple configurations.
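The Pareto cut-off mentioned in Figure 1 can be illustrated with a small sketch. The paper does not specify its exact splitting procedure, so the function below is a hypothetical implementation of a classic Pareto-style split: entities are ranked by claim count, and the "head" is the smallest set of entities that jointly account for a given share (here 80%) of all claims; the remainder form the long tail. The function name and the 80% threshold are illustrative assumptions, not the authors' method.

```python
def pareto_head_size(claim_counts, share=0.8):
    """Return how many top-ranked entities account for `share` of all claims.

    claim_counts: iterable of per-entity claim counts (any order).
    share: fraction of total claims the head must cover (0.8 = classic 80/20).
    """
    ranked = sorted(claim_counts, reverse=True)  # most-described entities first
    target = share * sum(ranked)
    cumulative = 0
    for i, count in enumerate(ranked, start=1):
        cumulative += count
        if cumulative >= target:
            return i  # head = top-i entities; the rest are the long tail
    return len(ranked)


# Toy example: a skewed distribution where two entities dominate.
counts = [100, 50, 10, 5, 3, 2, 1, 1]
head_n = pareto_head_size(counts)          # → 2 entities cover >= 80% of claims
tail_n = len(counts) - head_n              # → 6 long-tail entities
```

On strongly skewed (power-law-like) distributions such as Wikidata claim counts, the head is typically a small fraction of all entities, which is exactly the imbalance that motivates a benchmark focused on the long tail.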