
Training the Knowledge Base through Evidence Distillation and Write-Back Enrichment

Yuxing Lu, Xukai Zhao, Wei Wu, Jinzhuo Wang

Abstract

The knowledge base in a retrieval-augmented generation (RAG) system is typically assembled once and never revised, even though the facts a query requires are often fragmented across documents and buried in irrelevant content. We argue that the knowledge base should be treated as a trainable component and propose WriteBack-RAG, a framework that uses labeled examples to identify where retrieval succeeds, isolate the relevant documents, and distill them into compact knowledge units that are indexed alongside the original corpus. Because the method modifies only the corpus, it can be applied once as an offline preprocessing step and combined with any RAG pipeline. Across four RAG methods, six benchmarks, and two LLM backbones, WriteBack-RAG improves every evaluated setting, with gains averaging +2.14%. Cross-method transfer experiments further show that the distilled knowledge benefits RAG pipelines other than the one used to produce it, confirming that the improvement resides in the corpus itself.
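To make the abstract's pipeline concrete, the sketch below shows one possible offline write-back pass, based only on the abstract and the Figure 2 caption: gate on examples where retrieval helps, keep the contributing documents, distill them into a compact knowledge unit, and collect those units for indexing alongside the original corpus. All names (`writeback_enrich`, `distill`, the leave-one-out document check, etc.) are hypothetical illustrations, not the authors' API.

```python
"""Minimal sketch of a write-back enrichment pass (assumptions, not the paper's code)."""
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Example:
    question: str
    answer: str  # gold label used by the gating stages

def writeback_enrich(
    train_set: List[Example],
    retrieve: Callable[[str, int], List[str]],   # query, k -> documents
    generate: Callable[[str, List[str]], str],   # question, context -> answer
    distill: Callable[[str, List[str]], str],    # question, evidence -> knowledge unit
    is_correct: Callable[[str, str], bool],      # prediction, gold -> match?
    k: int = 5,
) -> List[str]:
    """Return compact knowledge units to be indexed alongside the original corpus."""
    writeback_corpus: List[str] = []
    for ex in train_set:
        docs = retrieve(ex.question, k)

        # Stage 1 gate: keep the example only if retrieval actually helps,
        # i.e. the answer is wrong without context but right with it.
        if is_correct(generate(ex.question, []), ex.answer):
            continue
        if not is_correct(generate(ex.question, docs), ex.answer):
            continue

        # Stage 2 gate (one possible realization): keep only documents whose
        # removal breaks the correct answer, as a proxy for "contributing".
        contributing = [
            d for d in docs
            if not is_correct(
                generate(ex.question, [x for x in docs if x != d]), ex.answer
            )
        ]
        if not contributing:
            contributing = docs

        # Distill the selected evidence into one compact knowledge unit.
        writeback_corpus.append(distill(ex.question, contributing))
    return writeback_corpus

# At test time the retriever simply searches the union of the original corpus
# and writeback_corpus; the retriever and generator themselves are unchanged.
```

Because the pass touches only the corpus, it runs once offline and the resulting units can be indexed for any downstream RAG pipeline.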

Paper Structure

This paper contains 34 sections, 12 equations, 5 figures, 6 tables, and 1 algorithm.

Figures (5)

  • Figure 1: Standard RAG retrieves fragmented evidence from raw documents. WriteBack-RAG distills useful evidence into compact write-back documents that improve future retrieval and generation.
  • Figure 2: The WriteBack-RAG pipeline. During training (top), a two-stage gating mechanism identifies examples where retrieval helps and selects the contributing documents. An LLM distiller fuses the selected evidence into a compact knowledge unit, which is indexed into a separate write-back corpus. During testing (bottom), the retriever searches the combined knowledge source, with no changes to the retriever or generator.
  • Figure 3: Retrieval-rank distribution of retained documents. Each panel shows the fraction of retrieved documents at each rank that are retained.
  • Figure 4: Cross-writeback robustness. Same-WB uses write-back knowledge from the same RAG method, while Cross-WB uses write-back knowledge from the other method. Numbers above the bars denote absolute gains over the No-WB baseline.
  • Figure 5: Source evidence length versus distilled write-back knowledge length for six benchmarks.