
Efficient Document Ranking with Learnable Late Interactions

Ziwei Ji, Himanshu Jain, Andreas Veit, Sashank J. Reddi, Sadeep Jayasumana, Ankit Singh Rawat, Aditya Krishna Menon, Felix Yu, Sanjiv Kumar

TL;DR

The paper tackles the efficiency gap in information retrieval between high-accuracy cross-encoders and low-latency dual-encoders by introducing LITE, a learnable late-interaction scorer. LITE processes the token-level similarity matrix $\mathbf{S}=\mathbf{Q}^\top\mathbf{D}$ with separable, shallow MLPs to produce a scalar score, and it is shown to be a universal approximator of continuous scoring functions in $\ell_2$ distance under practical embedding budgets, even with only two query and two document tokens feeding the final scorer. Empirically, LITE outperforms ColBERT on MS MARCO and Natural Questions in-domain re-ranking and on BEIR zero-shot transfer, while reducing latency and storage (e.g., 0.25x storage with competitive accuracy). The combination of a theoretical universal approximation result and strong empirical gains demonstrates that learnable late-interaction scorers can provide a scalable, high-quality alternative to handcrafted reductions such as ColBERT's sum-of-max, with meaningful real-world impact for re-ranking pipelines.
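
As a concrete picture of the scorer described above, here is a minimal NumPy sketch of a separable LITE-style forward pass, assuming a row-wise-then-column-wise MLP ordering and illustrative sizes; the paper's exact depths, widths, activations, and output head may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, b1, w2, b2):
    # Two-layer MLP with ReLU, applied along the last axis of x.
    return np.maximum(x @ w1 + b1, 0.0) @ w2 + b2

# Illustrative sizes, not the paper's: d = token embedding dim,
# Lq / Ld = number of query / document tokens, k = hidden width.
d, Lq, Ld, k = 64, 8, 16, 32

Q = rng.normal(size=(d, Lq))      # query token embeddings
D = rng.normal(size=(d, Ld))      # document token embeddings
S = Q.T @ D                       # similarity matrix, shape (Lq, Ld)

# Separable scorer: one MLP acts on each row of S (across document
# tokens), a second acts on each column of the result (across query
# tokens), and a linear head reduces to a scalar score.
w1, b1 = rng.normal(size=(Ld, k)), np.zeros(k)
w2, b2 = rng.normal(size=(k, 4)), np.zeros(4)
H = mlp(S, w1, b1, w2, b2)        # (Lq, Ld) -> (Lq, 4)

w3, b3 = rng.normal(size=(Lq, k)), np.zeros(k)
w4, b4 = rng.normal(size=(k, 4)), np.zeros(4)
G = mlp(H.T, w3, b3, w4, b4)      # (4, Lq) -> (4, 4)

score = float(G.ravel() @ rng.normal(size=G.size))
print(score)                      # scalar relevance score
```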

Abstract

Cross-Encoder (CE) and Dual-Encoder (DE) models are two fundamental approaches for query-document relevance in information retrieval. To predict relevance, CE models use joint query-document embeddings, while DE models maintain factorized query and document embeddings; usually, the former has higher quality while the latter benefits from lower latency. Recently, late-interaction models have been proposed to realize more favorable latency-quality tradeoffs, by using a DE structure followed by a lightweight scorer based on query and document token embeddings. However, these lightweight scorers are often hand-crafted, and there is no understanding of their approximation power; further, such scorers require access to individual document token embeddings, which imposes an increased latency and storage burden. In this paper, we propose novel learnable late-interaction models (LITE) that resolve these issues. Theoretically, we prove that LITE is a universal approximator of continuous scoring functions, even for relatively small embedding dimension. Empirically, LITE outperforms previous late-interaction models such as ColBERT on both in-domain and zero-shot re-ranking tasks. For instance, experiments on MS MARCO passage re-ranking show that LITE not only yields a model with better generalization, but also lowers latency and requires 0.25x storage compared to ColBERT.
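
To make the baselines in the abstract concrete, here is a short NumPy sketch of the two fixed scoring rules that LITE's learnable scorer replaces: the DE dot product over pooled embeddings, and ColBERT's hand-crafted sum-of-max late-interaction reduction. Pooling choice and shapes are illustrative, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d, Lq, Ld = 64, 8, 16
Q = rng.normal(size=(d, Lq))   # query token embeddings
D = rng.normal(size=(d, Ld))   # document token embeddings

# Dual-encoder: pool each side to a single vector and take a dot
# product. (Mean pooling is one common choice; implementations vary.)
de_score = float(Q.mean(axis=1) @ D.mean(axis=1))

# ColBERT-style late interaction: for each query token, take the max
# similarity over document tokens, then sum over query tokens.
S = Q.T @ D                    # (Lq, Ld) similarity matrix
colbert_score = float(S.max(axis=1).sum())

print(de_score, colbert_score)
```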

Paper Structure

This paper contains 35 sections, 9 theorems, 34 equations, 3 figures, 9 tables.

Key Result

Theorem 3.1

Let $s:\mathbb{R}^{P\times L_1}\times\mathbb{R}^{P\times L_2}\to\mathbb{R}$ denote a continuous scoring function with compact support $\Omega$, and let $L_1,L_2\ge2$. For any $\mathcal{F}\in\{\mathcal{F}_{\rm f},\mathcal{F}_{\rm s}\}$ and any $\epsilon>0$, there exist a scorer $f\in\mathcal{F}$ and token embedding maps $T_1$ (for queries) and $T_2$ (for documents) such that the induced LITE score approximates $s$ to within $\epsilon$ in $\ell_2$ distance on $\Omega$.
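
In symbols, a hedged reading of the theorem's conclusion, assuming the $\ell_2$ distance is the $L_2(\Omega)$ norm and writing the LITE input as the similarity matrix $T_1(\mathbf{Q})^\top T_2(\mathbf{D})$; the paper's exact conventions may differ:

```latex
% Assumed reading of the guarantee, not a verbatim quote of the paper:
% the LITE score is epsilon-close to s in L_2 over the support Omega.
\[
  \left( \int_{\Omega}
    \left| f\!\left(T_1(\mathbf{Q})^\top T_2(\mathbf{D})\right)
           - s(\mathbf{Q},\mathbf{D}) \right|^2
    \, d(\mathbf{Q},\mathbf{D}) \right)^{1/2}
  \;\le\; \epsilon .
\]
```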

Figures (3)

  • Figure 1: Illustration of different query-document relevance models. (a) CE models compute a joint query-document embedding by passing the concatenated query/document tokens through a single Transformer. (b) In DE models, query and document embeddings are computed separately with their respective Transformers and the relevance score is the dot product of these embeddings. (c) In the proposed LITE method, query and document token embeddings are computed similarly to DE, but instead of a dot product, we first compute the similarity matrix between each pair of query and document tokens, and pass this matrix through an MLP to produce the final relevance score.
  • Figure 2: MS MARCO MRR with fewer document tokens.
  • Figure 3: MS MARCO MRR with reduced token dimension (storage arithmetic for these settings is sketched after this list).
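
Figures 2 and 3 vary the number of stored document tokens and the token dimension, which is where the storage savings quoted above come from. A back-of-envelope sketch, assuming illustrative sizes and float16 storage (the paper's exact configuration determines the real 0.25x figure):

```python
def token_storage_gib(n_docs, tokens_per_doc, dim, bytes_per_val=2):
    """Raw footprint of stored token embeddings (float16 by default)."""
    return n_docs * tokens_per_doc * dim * bytes_per_val / 2**30

# Illustrative sizes only: the MS MARCO passage corpus has ~8.8M docs.
baseline = token_storage_gib(8_841_823, tokens_per_doc=128, dim=128)

# Either a 4x smaller token dimension or 4x fewer stored document
# tokens cuts the footprint to 0.25x of the baseline.
smaller_dim = token_storage_gib(8_841_823, tokens_per_doc=128, dim=32)
fewer_tokens = token_storage_gib(8_841_823, tokens_per_doc=32, dim=128)

print(f"{baseline:.1f} GiB -> {smaller_dim:.1f} GiB "
      f"({smaller_dim / baseline:.2f}x) or {fewer_tokens:.1f} GiB "
      f"({fewer_tokens / baseline:.2f}x)")
```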

Theorems & Definitions (15)

  • Theorem 3.1: Universal approximation with LITE
  • Theorem 3.2: Limitation of DE with restricted embedding dimension
  • Theorem B.1: Universal approximation with LITE
  • Lemma B.2: Lemma 5 of Yun et al. (2020)
  • Lemma B.3: Lemma 6 of Yun et al. (2020)
  • Lemma B.4
  • Proof
  • Proof of Theorem B.1 (universal approximation with LITE), without positional encodings
  • Lemma C.1
  • Proposition C.2
  • ...and 5 more