Aligning Multimodal Sequential Recommendations via Robust Direct Preference Optimization with Sparse MoE

Hejin Huang, Jusheng Zhang, Kaitong Cai, Jian Wang, Rong Pan

Abstract

Preference-based alignment objectives have been widely adopted, from RLHF-style pairwise learning in large language models to emerging applications in recommender systems. Yet existing work rarely examines how Direct Preference Optimization (DPO) behaves under implicit feedback, where unobserved items are not reliable negatives. We conduct systematic experiments on multimodal sequential recommendation to compare common negative-selection strategies and their interaction with DPO training. Our central finding is that a simple modification, replacing deterministic hard negatives with stochastic sampling from a dynamic top-K candidate pool, consistently improves ranking performance. We attribute its effectiveness to two factors: (1) reducing erroneous suppressive gradients caused by false negatives, and (2) retaining informative hard signals while smoothing optimization via controlled stochasticity. With an optional sparse Mixture-of-Experts encoder for efficient capacity scaling, the resulting method, RoDPO, improves NDCG@5 by up to 5.25% on three Amazon benchmarks with nearly unchanged inference cost.
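To make the negative-selection change concrete, the following is a minimal PyTorch sketch of a DPO-style pairwise loss whose negative is sampled stochastically from a dynamic top-K candidate pool rather than taken as the single deterministic hard negative. The tensor shapes, the uniform choice within the pool, and all function names are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def sample_topk_negatives(policy_logits, y_w, k=50):
    """Pick one negative per sequence from the current top-K scored items,
    excluding the observed next item y_w (the positive)."""
    scores = policy_logits.detach().clone()
    scores.scatter_(1, y_w.unsqueeze(1), float("-inf"))   # never sample the positive
    pool = scores.topk(k, dim=1).indices                   # [B, K] dynamic candidate pool
    pick = torch.randint(0, k, (scores.size(0), 1), device=scores.device)
    return pool.gather(1, pick).squeeze(1)                  # [B] sampled negatives y_l

def robust_dpo_loss(policy_logits, ref_logits, y_w, k=50, beta=1.0):
    """DPO pairwise loss on (y_w, y_l) against a frozen reference policy."""
    y_l = sample_topk_negatives(policy_logits, y_w, k)
    logp = policy_logits.log_softmax(-1)
    logp_ref = ref_logits.log_softmax(-1)
    # implicit reward margins of the trainable policy vs. the frozen reference
    margin_w = logp.gather(1, y_w.unsqueeze(1)) - logp_ref.gather(1, y_w.unsqueeze(1))
    margin_l = logp.gather(1, y_l.unsqueeze(1)) - logp_ref.gather(1, y_l.unsqueeze(1))
    return -F.logsigmoid(beta * (margin_w - margin_l)).mean()

# toy usage: next-item logits over the catalogue for a batch of user sequences
B, num_items = 4, 1000
policy_logits = torch.randn(B, num_items, requires_grad=True)   # trainable policy
ref_logits = torch.randn(B, num_items)                           # frozen reference policy
y_w = torch.randint(0, num_items, (B,))                          # observed next items
robust_dpo_loss(policy_logits, ref_logits, y_w).backward()
```

Because the candidate pool is rebuilt from the current policy scores at every step, the same positive faces different hard-but-plausible negatives across updates, which is the controlled stochasticity the abstract credits with smoothing optimization.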

Paper Structure

This paper contains 36 sections, 7 equations, 5 figures, 5 tables, and 1 algorithm.

Figures (5)

  • Figure 1: The "False Negative" Dilemma in RecSys DPO. Left: In NLP, negatives are strictly incorrect, making DPO effective. Middle: In RecSys, hard negatives may include unobserved positives, causing mis-penalization and embedding drift. Right: Top-$K$ sampling adds randomness among candidates, reducing systematic mis-penalization of potential positives.
  • Figure 2: Overall framework of RoDPO. (a) Multimodal Encoder: Item IDs and text/image features are embedded into a shared space, refined by a sparse MoE, and modeled by modality-specific Transformers with a temporal MoE for next-item prediction. (b) Robust DPO: Preference pairs are formed by the observed item $y_w$ and a sampled negative $y_l$ from a dynamic top-$K$ pool (excluding $y_w$), optimized against a frozen reference policy.
  • Figure 3: Effect of Top-$k$ Negative Sampling and DPO Coefficient $\beta$ on NDCG@5 and MRR@5.
  • Figure 4: Kernel Density Estimation (KDE) of predicted logit distributions for positive items ($y_w$) and hard negative items ($y_l$) on the Amazon Toys and Games dataset.
  • Figure 5: A qualitative case study on the Amazon Toys and Games dataset.
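Figure 2(a) mentions a sparse MoE layer that refines the fused item embeddings. For readers unfamiliar with the mechanism, the sketch below shows generic top-k expert routing, the standard way a sparse MoE scales capacity while keeping per-token compute roughly constant; the expert count, widths, and routing details are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class SparseMoE(nn.Module):
    """Token-level top-k routing over a bank of small feed-forward experts."""
    def __init__(self, dim=64, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(dim, num_experts)            # gating network
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):                                     # x: [B, L, dim]
        gate = self.router(x).softmax(-1)                     # [B, L, E]
        weights, idx = gate.topk(self.top_k, dim=-1)          # keep only top-k experts
        weights = weights / weights.sum(-1, keepdim=True)     # renormalize kept gates
        out = torch.zeros_like(x)
        for slot in range(self.top_k):                        # dense loop for clarity
            for e, expert in enumerate(self.experts):
                mask = idx[..., slot] == e                    # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[..., slot][mask].unsqueeze(-1) * expert(x[mask])
        return out

# usage: refine fused multimodal item embeddings before the sequential Transformer
emb = torch.randn(4, 20, 64)                                  # [batch, seq_len, dim]
refined = SparseMoE()(emb)
```

Only the top-k experts run per token, so adding experts increases parameter count (capacity) without a proportional increase in per-token compute, consistent with the abstract's claim of nearly unchanged inference cost.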