Ragnarök: A Reusable RAG Framework and Baselines for TREC 2024 Retrieval-Augmented Generation Track

Ronak Pradeep, Nandan Thakur, Sahel Sharifymoghaddam, Eric Zhang, Ryan Nguyen, Daniel Campos, Nick Craswell, Jimmy Lin

TL;DR

The paper presents Ragnarök, an open-source, end-to-end RAG framework featuring a two-module pipeline (retrieval with reranking and augmented generation) designed to standardize development, evaluation, and visualization of RAG systems. It introduces the MS MARCO V2.1 document and segment collections created via deduplication and sliding-window chunking, along with two long-form topic collections (TREC-RAGgy 2024 and TREC-Researchy 2024) to stress aggregation and knowledge-intensive tasks. The authors provide baseline retrieval and generation configurations using BM25, RankZephyr, GPT-4o, and Cohere Command R+, and offer a RAG-Bench style evaluation plus a web-based Ragnarök System Arena for pairwise comparisons. By open-sourcing the framework and datasets, the work aims to establish a reusable, extensible standard for future RAG research and competitions.
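The segment collection mentioned above is produced by sliding-window chunking of the deduplicated documents. As a rough illustration only (the window and stride values here are placeholders, not the track's actual segmentation parameters), such chunking might look like:

```python
def sliding_window_segments(sentences, window=10, stride=5):
    """Chunk a document (given as a list of sentences) into overlapping segments.

    Each segment covers `window` consecutive sentences; consecutive segments
    start `stride` sentences apart, so neighbours overlap by `window - stride`.
    """
    if len(sentences) <= window:
        return [" ".join(sentences)]
    segments, start = [], 0
    while start < len(sentences):
        segments.append(" ".join(sentences[start:start + window]))
        if start + window >= len(sentences):
            break  # the last window already reaches the end of the document
        start += stride
    return segments

# A 12-sentence "document" yields two overlapping segments.
doc = [f"Sentence {i}." for i in range(12)]
segments = sliding_window_segments(doc)
```

Overlapping windows trade some index redundancy for the guarantee that no passage is split mid-context, which matters when segments are retrieved in isolation.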

Abstract

Did you try out the new Bing Search? Or maybe you fiddled around with Google AI Overviews? These might sound familiar because the modern-day search stack has recently evolved to include retrieval-augmented generation (RAG) systems. They can search and incorporate real-time data into large language models (LLMs) to provide well-informed, attributed, concise summaries, in contrast to the traditional search paradigm of displaying a ranked list of documents. Given these recent advancements, it is crucial to have an arena to build, test, visualize, and systematically evaluate RAG-based search systems. With this in mind, we propose the TREC 2024 RAG Track to foster innovation in evaluating RAG systems. In this work, we lay out the steps we've made towards making this track a reality: we describe the details of our reusable framework, Ragnarök, explain the curation of the new MS MARCO V2.1 collection, release the development topics for the track, and standardize the I/O definitions that assist the end user. Next, using Ragnarök, we identify and provide key industrial baselines such as OpenAI's GPT-4o and Cohere's Command R+. Further, we introduce a web-based user interface for an interactive arena that allows pairwise benchmarking of RAG systems via crowdsourcing. We open-source our Ragnarök framework and baselines to establish a unified standard for future RAG systems.
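The abstract mentions standardized I/O definitions with sentence-level citations. The official schema is not reproduced here; as a purely illustrative sketch, a generation-step response in that spirit might look like the following, where `topic_id`, `answer`, `citations`, and `references` are assumed field names and the segment ID is made up:

```python
import json

# Hypothetical RAG response: each answer sentence carries its own citations,
# expressed as indices into a shared `references` list of retrieved segments.
response = {
    "topic_id": "dev-0001",
    "answer": [
        {
            "text": "Roger Waters' experiences on tour inspired The Wall.",
            "citations": [0],  # index into the `references` list below
        },
    ],
    "references": ["example_segment_id_0"],
}
print(json.dumps(response, indent=2))
```

Keeping citations per sentence, rather than per answer, is what makes fine-grained attribution checks possible during evaluation.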

Paper Structure

This paper contains 27 sections, 5 figures, and 4 tables.

Figures (5)

  • Figure 1: Schematic diagram of the Ragnarök framework. Given a user topic (left), the process consists of two steps: (1) (R) retrieval (+ rerank), where the topic yields the top-$k$ relevant segments from our document collection (e.g., potty training articles); and (2) (AG) augmented generation, where the retrieved segments, together with a suitable prompt template, are fed to the large language model (LLM) to generate the post-processed answer response (JSON) containing individual sentence-level citations.
  • Figure 2: ChatQA prompt template [liu:2024b] used for RAG generation with in-text citations with GPT-4o in our Ragnarök framework.
  • Figure 3: WebUI showcasing the Ragnarök System Arena and the user query, "what inspired pink floyd's the wall?", with answers from two pipelines side-by-side comparing GPT-4o answer (left) and Command R+ answer (right).
  • Figure 4: WebUI (dark mode) showcasing the Ragnarök System Arena for the user query "why have used car prices increased" from TREC-Researchy 2024 with two different blinded pipelines. The output tab displays the answers in human-readable form.
  • Figure 5: The responses tab for the example in Figure 4. Note that the responses tab reformats the final answers into the JSON format expected by the I/O definitions of the TREC 2024 RAG Track.
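The two-step flow of Figure 1 can be sketched end to end. The term-overlap scorer and templated "generation" below are deliberately trivial stand-ins for the actual BM25/RankZephyr retrieval stack and the GPT-4o / Command R+ generators; only the shape of the pipeline (retrieve top-k segments, then generate an answer with per-sentence citations) is meant to match.

```python
def retrieve(query, collection, k=2):
    """(R) step stand-in: score segments by raw term overlap, return top-k."""
    q_terms = set(query.lower().split())
    scored = sorted(
        collection.items(),
        key=lambda item: -len(q_terms & set(item[1].lower().split())),
    )
    return scored[:k]

def augmented_generation(query, segments):
    """(AG) step stand-in: one answer sentence per segment, each citing it."""
    return [{"text": text, "citations": [seg_id]} for seg_id, text in segments]

# Toy segment collection (IDs and contents are made up for illustration).
collection = {
    "seg_1": "The Wall was inspired by Roger Waters' experiences on tour.",
    "seg_2": "Potty training articles recommend consistency and patience.",
    "seg_3": "Pink Floyd recorded The Wall in 1979.",
}
query = "what inspired pink floyd's the wall?"
answer = augmented_generation(query, retrieve(query, collection))
```

A real pipeline would swap in a first-stage ranker, a listwise reranker, and an LLM prompt such as the ChatQA template of Figure 2, but the module boundary between (R) and (AG) stays exactly where it is here.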