Efficient Document Ranking with Learnable Late Interactions
Ziwei Ji, Himanshu Jain, Andreas Veit, Sashank J. Reddi, Sadeep Jayasumana, Ankit Singh Rawat, Aditya Krishna Menon, Felix Yu, Sanjiv Kumar
TL;DR
The paper tackles the efficiency gap in information retrieval between high-accuracy Cross-Encoders and low-latency Dual-Encoders by introducing LITE, a learnable late-interaction scorer. LITE processes the token-level similarity matrix $\mathbf{S}=\mathbf{Q}^\top\mathbf{D}$ with separable, shallow MLPs to produce a scalar relevance score, and is shown to be a universal approximator of continuous scoring functions in $\ell_2$ distance under practical embedding budgets, even when only two query and two document tokens feed the final scorer. Empirically, LITE outperforms ColBERT on in-domain re-ranking for MS MARCO and Natural Questions and on BEIR zero-shot transfer, while reducing latency and storage (e.g., 0.25x the storage of ColBERT at competitive accuracy). Together, the universal-approximation guarantee and the empirical gains show that learnable late-interaction scorers are a scalable, high-quality alternative to handcrafted reductions, with practical benefits for re-ranking pipelines.
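The separable-MLP idea above can be sketched as follows. This is a minimal illustration, not the paper's exact architecture: the layer widths, depths, and the dimension `k` of the row-wise output are placeholder choices, and the real LITE scorer's parameters are learned rather than random.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: m query tokens, n document tokens, embedding dim d.
m, n, d, h, k = 4, 8, 16, 32, 8

Q = rng.standard_normal((d, m))  # query token embeddings (columns)
D = rng.standard_normal((d, n))  # document token embeddings (columns)

# Token-level similarity matrix S = Q^T D, shape (m, n).
S = Q.T @ D

def mlp(x, w1, b1, w2, b2):
    """Two-layer MLP with ReLU, applied along the last axis."""
    return np.maximum(x @ w1 + b1, 0.0) @ w2 + b2

# Separable scoring sketch:
# 1) a row-wise MLP maps each row of S (length n) to a length-k vector,
# 2) a second MLP maps the flattened result to a scalar score.
w1, b1 = 0.1 * rng.standard_normal((n, h)), np.zeros(h)
w2, b2 = 0.1 * rng.standard_normal((h, k)), np.zeros(k)
row_out = mlp(S, w1, b1, w2, b2)            # shape (m, k)

v1, c1 = 0.1 * rng.standard_normal((m * k, h)), np.zeros(h)
v2, c2 = 0.1 * rng.standard_normal((h, 1)), np.zeros(1)
score = mlp(row_out.reshape(-1), v1, c1, v2, c2)  # scalar relevance score

print(float(score))
```

Because every query-document pair is reduced through the same small MLPs acting on $\mathbf{S}$, the scorer stays lightweight relative to a full Cross-Encoder while remaining trainable end to end.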
Abstract
Cross-Encoder (CE) and Dual-Encoder (DE) models are two fundamental approaches for query-document relevance in information retrieval. To predict relevance, CE models use joint query-document embeddings, while DE models maintain factorized query and document embeddings; usually, the former has higher quality while the latter benefits from lower latency. Recently, late-interaction models have been proposed to realize more favorable latency-quality tradeoffs, by using a DE structure followed by a lightweight scorer based on query and document token embeddings. However, these lightweight scorers are often hand-crafted, and there is no understanding of their approximation power; further, such scorers require access to individual document token embeddings, which imposes an increased latency and storage burden. In this paper, we propose novel learnable late-interaction models (LITE) that resolve these issues. Theoretically, we prove that LITE is a universal approximator of continuous scoring functions, even for relatively small embedding dimension. Empirically, LITE outperforms previous late-interaction models such as ColBERT on both in-domain and zero-shot re-ranking tasks. For instance, experiments on MS MARCO passage re-ranking show that LITE not only yields a model with better generalization, but also lowers latency and requires 0.25x storage compared to ColBERT.
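For contrast with the learnable scorer, the hand-crafted late interaction the abstract alludes to is ColBERT's MaxSim: each query token keeps its maximum cosine similarity over document tokens, and the per-token maxima are summed. A minimal sketch with random embeddings (the token counts and dimension are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, d = 4, 8, 16
Q = rng.standard_normal((m, d))  # query token embeddings (rows)
D = rng.standard_normal((n, d))  # document token embeddings (rows)

# Normalize so dot products are cosine similarities, as in ColBERT.
Q /= np.linalg.norm(Q, axis=1, keepdims=True)
D /= np.linalg.norm(D, axis=1, keepdims=True)

# MaxSim: for each query token, take the max similarity over document
# tokens, then sum over query tokens.
S = Q @ D.T                      # (m, n) token-level similarity matrix
score = S.max(axis=1).sum()
print(score)
```

Note that evaluating MaxSim requires all n individual document token embeddings at query time, which is the latency and storage burden the paper's learnable scorer is designed to reduce.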
