
Beyond Precision: Importance-Aware Recall for Factuality Evaluation in Long-Form LLM Generation

Nazanin Jafari, James Allan, Mohit Iyyer

Abstract

Evaluating the factuality of long-form output generated by large language models (LLMs) remains challenging, particularly when responses are open-ended and contain many fine-grained factual statements. Existing evaluation methods primarily focus on precision: they decompose a response into atomic claims and verify each claim against external knowledge sources such as Wikipedia. However, this overlooks an equally important dimension of factuality: recall, i.e., whether the generated response covers the relevant facts that should be included. We propose a comprehensive factuality evaluation framework that jointly measures precision and recall. Our method leverages external knowledge sources to construct reference facts and determines whether they are captured in the generated text. We further introduce an importance-aware weighting scheme based on relevance and salience. Our analysis reveals that current LLMs perform substantially better on precision than on recall, suggesting that factual incompleteness remains a major limitation of long-form generation and that models are generally better at covering highly important facts than the full set of relevant facts.
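The precision/recall framework described above can be sketched in code. The snippet below is an illustrative approximation, not the paper's implementation: the linear combination of relevance and salience with weights alpha and beta follows the scoring in Figure 2 ($\alpha=\beta=1$ for the combined score), but the function names, the per-fact score ranges, and the aggregation details are assumptions.

```python
def weighted_recall(reference_facts, covered, alpha=1.0, beta=1.0):
    """Importance-weighted recall over a reference fact set.

    reference_facts: list of (relevance, salience) scores for each fact,
        e.g. in [0, 1], drawn from an external knowledge source.
    covered: list of bools, whether the generated response covers each fact.
    alpha, beta: weights on relevance and salience (alpha=beta=1 gives the
        combined importance score; beta=0 is relevance-only, alpha=0 is
        salience-only).
    """
    weights = [alpha * rel + beta * sal for rel, sal in reference_facts]
    total = sum(weights)
    if total == 0:
        return 0.0
    return sum(w for w, c in zip(weights, covered) if c) / total


def precision(claims_supported):
    """Fraction of atomic claims in the response verified as supported.

    claims_supported: list of bools, one per decomposed atomic claim.
    """
    if not claims_supported:
        return 0.0
    return sum(claims_supported) / len(claims_supported)
```

Setting beta=0 or alpha=0 reproduces the relevance-only and salience-only variants compared in Figure 2; covering only the highest-weight facts yields a higher weighted recall than unweighted recall, matching the paper's observation that models cover important facts better than the full relevant set.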

Paper Structure

This paper contains 25 sections, 13 equations, 2 figures, 5 tables.

Figures (2)

  • Figure 1: Percentage of claims labeled as supported, not supported, or contradicted across models and datasets.
  • Figure 2: Recall comparison for fact reference sets formed using combined importance scoring ($\alpha=\beta=1$) versus relevance-only ($\alpha=1,\beta=0$) and salience-only ($\alpha=0,\beta=1$) scoring. The first column in each panel reports recall for the combined score (Co), while the next two columns show differences relative to salience-only ($\Delta$(Co-Sal)) and relevance-only ($\Delta$(Co-Rel)) rankings.