One Thousand and One Pairs: A "novel" challenge for long-context language models
Marzena Karpinska, Katherine Thai, Kyle Lo, Tanya Goyal, Mohit Iyyer
TL;DR
NoCha is a dataset of 1,001 minimal pairs of true and false narrative claims about 67 recently published English fiction titles, designed to stress-test long-context reasoning through claim verification. Unlike surface-retrieval benchmarks, NoCha requires global synthesis across book-length narratives, revealing a substantial gap between human readers and current long-context models (GPT-4o, the best performer, reaches only 55.8% accuracy; open-weight models perform no better than random chance). The study analyzes evidence scope, world-building complexity, and model explanations, showing that model justifications are often flawed even for correctly labeled claims and that retrieval-augmented approaches offer limited gains. Because the data-collection and evaluation methodology is scalable, the benchmark can evolve over time and be applied to future long-context systems in a more realistic setting.
Abstract
Synthetic long-context LLM benchmarks (e.g., "needle in a haystack") test only surface-level retrieval capabilities, but how well can long-context LLMs retrieve, synthesize, and reason over information across book-length inputs? We address this question by creating NoCha, a dataset of 1,001 minimally different pairs of true and false claims about 67 recently published English fictional books, written by human readers of those books. In contrast to existing long-context benchmarks, our annotators confirm that the largest share of pairs in NoCha require global reasoning over the entire book to verify. Our experiments show that while human readers easily perform this task, it is enormously challenging for all ten long-context LLMs that we evaluate: no open-weight model performs above random chance (despite strong performance on synthetic benchmarks), while GPT-4o achieves the highest accuracy at 55.8%. Further analysis reveals that (1) on average, models perform much better on pairs that require only sentence-level retrieval than on those that require global reasoning; (2) model-generated explanations for their decisions are often inaccurate even for correctly labeled claims; and (3) models perform substantially worse on speculative fiction books that contain extensive world-building. The methodology behind NoCha allows the benchmark dataset to evolve over time and makes it easy to analyze future models.
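To make the minimal-pair setup concrete, here is a minimal sketch of how such pairs might be scored, assuming pair-level credit (a model counts as correct only when it labels both the true and the false claim of a pair correctly); the `ClaimPair` structure and `verify` callback below are hypothetical illustrations, not the paper's actual code:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class ClaimPair:
    """One minimal pair: two claims about the same narrative fact."""
    book_id: str
    true_claim: str   # human-written claim consistent with the book
    false_claim: str  # minimally edited version that contradicts it


def pair_accuracy(
    pairs: list[ClaimPair],
    verify: Callable[[str, str], bool],  # (book_id, claim) -> predicted label
) -> float:
    """Fraction of pairs where BOTH claims receive the correct label."""
    if not pairs:
        return 0.0
    correct = sum(
        verify(p.book_id, p.true_claim) and not verify(p.book_id, p.false_claim)
        for p in pairs
    )
    return correct / len(pairs)
```

One property of this scoring scheme: a verifier that guesses labels uniformly at random gets each pair right only about 25% of the time, and a degenerate verifier that always answers "true" (or always "false") scores exactly 0, so minimal pairs penalize label bias that plain per-claim accuracy would reward.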
