Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA

Minzheng Wang, Longze Chen, Cheng Fu, Shengyi Liao, Xinghua Zhang, Bingli Wu, Haiyang Yu, Nan Xu, Lei Zhang, Run Luo, Yunshui Li, Min Yang, Fei Huang, Yongbin Li

TL;DR

Loong addresses a gap in long-context evaluation by presenting a realistic, multi-document QA benchmark that disperses evidence across 11+ documents in financial, legal, and academic domains. It introduces four evaluation tasks—Spotlight Locating, Comparison, Clustering, and Chain of Reasoning—across varied context lengths to probe cross-document reasoning. Extensive experiments across powerful LLMs show that even state-of-the-art models struggle, with RAG offering limited benefits, and scaling laws indicating that longer context windows require commensurately longer training. The results provide nuanced insights into long-context modeling, highlighting where current models fall short and guiding future improvements for robust multi-document understanding. Loong serves as a practical benchmark for evaluating real-world long-context capabilities and informs design choices for future LLM development and evaluation.

Abstract

Long-context modeling capabilities have garnered widespread attention, leading to the emergence of Large Language Models (LLMs) with ultra-long context windows. Meanwhile, benchmarks for evaluating long-context LLMs are gradually catching up. However, existing benchmarks employ irrelevant noise texts to artificially extend the length of test cases, diverging from real-world long-context applications. To bridge this gap, we propose a novel long-context benchmark, Loong, which aligns with realistic scenarios through extended multi-document question answering (QA). Unlike typical document QA, every document in a Loong test case is relevant to the final answer; ignoring any document will cause the answer to fail. Furthermore, Loong introduces four types of tasks across a range of context lengths: Spotlight Locating, Comparison, Clustering, and Chain of Reasoning, to facilitate a more realistic and comprehensive evaluation of long-context understanding. Extensive experiments indicate that existing long-context language models still have considerable room for improvement. Retrieval-augmented generation (RAG) achieves poor performance, demonstrating that Loong can reliably assess a model's long-context modeling capabilities.
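To make the "leave no document behind" property concrete, below is a minimal, hypothetical sketch of what a Loong-style test instance could look like. The schema and field names (`documents`, `question`, `evidence_doc_ids`, `answer`) are illustrative assumptions, not the benchmark's actual data format; the point is only that every document contributes evidence needed for the answer.

```python
from dataclasses import dataclass

@dataclass
class MultiDocQAInstance:
    """Hypothetical sketch of a Loong-style test case (field names are assumed)."""
    documents: list[str]         # e.g. 11+ financial, legal, or academic documents
    question: str                # e.g. a Comparison or Clustering question
    evidence_doc_ids: list[int]  # indices of documents carrying answer evidence
    answer: str

def all_documents_needed(instance: MultiDocQAInstance) -> bool:
    # In Loong, dropping any single document should make the answer unrecoverable,
    # so every document index must appear in the evidence set.
    return set(instance.evidence_doc_ids) == set(range(len(instance.documents)))
```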

Paper Structure

This paper contains 42 sections, 4 figures, and 11 tables.

Figures (4)

  • Figure 1: Previous benchmarks vs. Loong. A marker indicates the existence of evidence related to the answer in a given document. Compared to the centralized distribution in previous benchmarks, evidence in Loong is scattered across different parts of the multi-document long context, so no document can be ignored without failing the answer.
  • Figure 2: Showcase of the four evaluation tasks in Loong (<$\mathtt{d_i}$>...</$\mathtt{d_i}$> marks the content of the i-th document). a) Spotlight Locating: Locate the evidence. b) Comparison: Locate and compare the evidence. c) Clustering: Locate and cluster the evidence into groups. d) Chain of Reasoning: Locate and reason along a logical chain. (A prompt-assembly sketch based on this document-tag format follows the figure list.)
  • Figure 3: Results on all tasks after adding the RAG module. Only the Avg Scores (0–100) are shown.
  • Figure 4: Test Case Length Distribution in Loong.
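Based on the <$\mathtt{d_i}$>...</$\mathtt{d_i}$> document markers shown in Figure 2, a plausible way to assemble a multi-document context for a model might look like the following sketch. The exact tag format and instruction wording are assumptions for illustration, not the paper's actual prompt.

```python
def build_prompt(documents: list[str], question: str) -> str:
    """Concatenate documents with <di>...</di> markers (format assumed from Figure 2)."""
    tagged = [f"<d{i}>{doc}</d{i}>" for i, doc in enumerate(documents, start=1)]
    context = "\n\n".join(tagged)
    return f"{context}\n\nQuestion: {question}\nAnswer:"
```

A long-context model would receive this entire concatenation directly, whereas a RAG pipeline would first retrieve a subset of the tagged documents; because every document in Loong carries necessary evidence, any retrieval miss degrades the answer.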