A Thorough Performance Benchmarking on Lightweight Embedding-based Recommender Systems
Hung Vinh Tran, Tong Chen, Quoc Viet Hung Nguyen, Zi Huang, Lizhen Cui, Hongzhi Yin
TL;DR
This paper addresses the lack of standardized benchmarks for lightweight embedding-based recommender systems (LERSs) by conducting a comprehensive cross-task evaluation spanning collaborative filtering (CF) and content-based (CB) recommendation, using two CF and two CB datasets and examining three compression goals. It benchmarks a diverse set of LERS approaches (compositional, pruning, NAS-based, and hybrids) alongside a simple magnitude-pruning baseline (MagPrune), with hyperparameters tuned via Tree-structured Parzen Estimation and evaluation performed on both GPU and edge devices. Key findings show that performance gains depend on the task and dataset; simple pruning methods (e.g., PEP, MagPrune) can rival more complex methods, while cross-task transferability varies substantially across methods. The work provides practical guidance for model selection, highlights real-world efficiency trade-offs, and releases open-source code to facilitate reproducibility and future research on LERSs for edge-enabled recommendation.
Abstract
Since the creation of the Web, recommender systems (RSs) have been an indispensable mechanism in information filtering. State-of-the-art RSs primarily depend on categorical features, which are encoded by embedding vectors, resulting in excessively large embedding tables. To prevent over-parameterized embedding tables from harming scalability, both academia and industry have seen increasing efforts in compressing RS embeddings. However, despite the prosperity of lightweight embedding-based RSs (LERSs), their evaluation protocols vary widely, making it difficult to relate reported LERS performance to real-world usability. Moreover, despite the common goal of lightweight embeddings, each LERS is typically evaluated on only one of the two main recommendation tasks -- collaborative filtering and content-based recommendation. This lack of discussion on cross-task transferability hinders the development of unified, more scalable solutions. Motivated by these issues, this study investigates the performance, efficiency, and cross-task transferability of various LERSs via a thorough benchmarking process. Additionally, we propose an efficient embedding compression method based on magnitude pruning, an easy-to-deploy yet highly competitive baseline that outperforms various complex LERSs. Our study reveals the distinct performance of LERSs across the two tasks, shedding light on their effectiveness and generalizability. To support edge-based recommendation, we tested all LERSs on a Raspberry Pi 4, exposing their efficiency bottlenecks. Finally, we conclude this paper with critical summaries of LERS performance, model selection suggestions, and underexplored challenges around LERSs for future research. To encourage future research, we publish source code and artifacts at https://github.com/chenxing1999/recsys-benchmark.
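The abstract does not spell out how the magnitude-pruning baseline works; the following is a minimal sketch of generic magnitude pruning applied to an embedding table, not the paper's exact MagPrune implementation. The function name, the NumPy-based setup, and the target sparsity level are illustrative assumptions.

```python
import numpy as np


def magnitude_prune(emb: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries of an embedding table.

    emb: (num_entities, dim) embedding matrix.
    sparsity: fraction of entries to set to zero, in [0, 1].
    """
    k = int(emb.size * sparsity)  # number of entries to prune
    if k == 0:
        return emb.copy()
    # k-th smallest absolute value serves as the pruning threshold
    threshold = np.partition(np.abs(emb).ravel(), k - 1)[k - 1]
    mask = np.abs(emb) > threshold  # keep entries strictly above threshold
    return emb * mask


# Example: prune 80% of a small random embedding table
rng = np.random.default_rng(0)
table = rng.normal(size=(100, 16))
pruned = magnitude_prune(table, 0.8)
```

The pruned table can then be stored in a sparse format (e.g., CSR) to realize the memory savings on a memory-constrained edge device such as the Raspberry Pi 4 used in the paper's experiments.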
