
FGR-ColBERT: Identifying Fine-Grained Relevance Tokens During Retrieval

Antonín Jarolím, Martin Fajčík

Abstract

Document retrieval identifies relevant documents but does not provide fine-grained evidence cues, such as specific relevant spans. A possible solution is to apply an LLM after retrieval; however, this introduces significant computational overhead and limits practical deployment. We propose FGR-ColBERT, a modification of the ColBERT retrieval model that integrates fine-grained relevance signals distilled from an LLM directly into the retrieval function. Experiments on MS MARCO show that FGR-ColBERT (110M) achieves a token-level F1 of 64.5, exceeding the 62.8 achieved by Gemma 2 (27B), despite being approximately 245 times smaller. At the same time, it preserves retrieval effectiveness (99% relative Recall@50) and remains efficient, incurring only a ~1.12x latency overhead compared to the original ColBERT.
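
As a rough illustration of the idea, the sketch below shows standard ColBERT-style MaxSim late interaction extended with a per-document-token score. This is not the paper's actual implementation: the way `token_scores` is derived and used here is an assumption for illustration only.

```python
# Minimal sketch: ColBERT-style late interaction plus a hypothetical
# per-token relevance signal (assumed formulation, not the paper's).
import numpy as np

def late_interaction_with_token_scores(Q, D):
    """Q: (num_query_tokens, dim) L2-normalized query embeddings.
    D: (num_doc_tokens, dim) L2-normalized document embeddings.
    Returns (document_score, per_document_token_scores)."""
    sim = Q @ D.T                      # (q_tokens, d_tokens) cosine similarities
    doc_score = sim.max(axis=1).sum()  # standard ColBERT MaxSim document score
    # Hypothetical fine-grained signal: how strongly each document token is
    # matched by its best query token (could be supervised with LLM cues).
    token_scores = sim.max(axis=0)     # (d_tokens,)
    return doc_score, token_scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Q = rng.normal(size=(4, 128)); Q /= np.linalg.norm(Q, axis=1, keepdims=True)
    D = rng.normal(size=(30, 128)); D /= np.linalg.norm(D, axis=1, keepdims=True)
    score, tok = late_interaction_with_token_scores(Q, D)
    print(score, tok.shape)
```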

Paper Structure

This paper contains 9 sections, 5 equations, 2 figures, and 3 tables.

Figures (2)

  • Figure 1: (a) ColBERT retrieval followed by LLM span extraction vs. our approach with integrated LLM knowledge transfer. (b) Newly proposed late interaction with an added token-level relevance scoring, preserving document-level relevance.
  • Figure 2: Three positive passage-query pairs and corresponding token-level scores (highlighted; darker is higher) derived from the FGR-ColBERT model and fine-grained relevance cues provided by Gemma 2 (bold).
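
For reference, the following is a minimal sketch of a set-based token-level F1, the kind of metric quoted in the abstract, comparing tokens flagged by the retriever against the fine-grained cues provided by the LLM. The thresholding and matching rules used in the paper may differ; this version is an assumption for illustration.

```python
# Set-based token-level F1 between predicted and reference relevant tokens
# (assumed evaluation recipe; the paper's exact matching rules may differ).
def token_f1(predicted_tokens, reference_tokens):
    predicted, reference = set(predicted_tokens), set(reference_tokens)
    if not predicted or not reference:
        return 0.0
    overlap = len(predicted & reference)
    if overlap == 0:
        return 0.0
    precision = overlap / len(predicted)
    recall = overlap / len(reference)
    return 2 * precision * recall / (precision + recall)

# Example: indices of document tokens flagged by the model vs. LLM cues.
print(token_f1({3, 4, 5, 9}, {4, 5, 6, 9}))  # 0.75
```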