TRU: Targeted Reverse Update for Efficient Multimodal Recommendation Unlearning

Zhanting Zhou, KaHou Tam, Ziqiang Zheng, Zeyu Ma

Abstract

Multimodal recommendation systems (MRS) jointly model user-item interaction graphs and rich item content, but this tight coupling makes user data difficult to remove once learned. Approximate machine unlearning offers an efficient alternative to full retraining, yet existing methods for MRS mainly rely on a largely uniform reverse update across the model. We show that this assumption is fundamentally mismatched to modern MRS: deleted-data influence is not uniformly distributed, but concentrated unevenly across ranking behavior, modality branches, and network layers. This non-uniformity gives rise to three bottlenecks in MRS unlearning: target-item persistence in the collaborative graph, modality imbalance across feature branches, and layer-wise sensitivity in the parameter space. To address this mismatch, we propose targeted reverse update (TRU), a plug-and-play unlearning framework for MRS. Instead of applying a blind global reversal, TRU performs three coordinated interventions across the model hierarchy: a ranking fusion gate to suppress residual target-item influence in ranking, branch-wise modality scaling to preserve retained multimodal representations, and capacity-aware layer isolation to localize reverse updates to deletion-sensitive modules. Experiments across two representative backbones, three datasets, and three unlearning regimes show that TRU consistently achieves a better retain-forget trade-off than prior approximate baselines, while security audits further confirm deeper forgetting and behavior closer to that of full retraining on the retained data.
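To make the three interventions concrete, below is a minimal PyTorch sketch of what one targeted reverse step could look like. All names here (targeted_reverse_update, sensitive_layers, branch_scales, ranking_gate, and so on) are illustrative assumptions for exposition, not the paper's actual implementation.

# Hypothetical sketch of a TRU-style targeted reverse update.
# Assumes a PyTorch backbone whose parameter names expose modality
# branches and layer modules via name prefixes.
import torch

def targeted_reverse_update(model, forget_batch, loss_fn,
                            sensitive_layers, branch_scales, lr=1e-3):
    """One reverse (gradient-ascent) step on the forget set that is
    (i) localized to deletion-sensitive layers and
    (ii) scaled per modality branch."""
    model.zero_grad()
    loss = loss_fn(model, forget_batch)  # recommendation loss on the forget set
    (-loss).backward()                   # reverse update: ascend the forget loss
    with torch.no_grad():
        for name, param in model.named_parameters():
            if param.grad is None:
                continue
            # Capacity-aware layer isolation: only update modules
            # flagged as deletion-sensitive; leave the rest untouched.
            if not any(name.startswith(p) for p in sensitive_layers):
                continue
            # Branch-wise modality scaling: damp updates on branches whose
            # retained representations should be preserved.
            scale = next((s for prefix, s in branch_scales.items()
                          if name.startswith(prefix)), 1.0)
            param -= lr * scale * param.grad

def ranking_gate(scores, target_items, gate_value=-1e4):
    """Score-level gate that suppresses residual target-item exposure
    by pushing target items out of the Top-K."""
    gated = scores.clone()
    gated[:, target_items] = gate_value
    return gated

Under these assumptions, one might pass branch_scales = {"visual_encoder": 0.2, "text_encoder": 0.2} so that content branches receive damped reverse updates while collaborative ID embeddings take the full step. The design intuition follows the abstract: localizing ascent to deletion-sensitive modules and scaling it per branch limits collateral damage to representations learned from the retained data, while the gate handles target items that persist in the ranking despite the parameter-space update.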

Paper Structure

This paper contains 33 sections, 7 equations, 9 figures, 5 tables, and 1 algorithm.

Figures (9)

  • Figure 1: Conceptual overview of MRS unlearning. Left: a deletion request triggers unlearning, but verification of the unlearned model fails. Right: ignoring the item-centric structure leads to these unlearning failures.
  • Figure 2: Overview of TRU. We diagnose three failure modes of uniform reverse unlearning in MRS: target-item persistence, weak item-modality fusion, and layer sensitivity. We map them to the Ranking Gate, Branch-wise Scaling, and Layer Selection components of a unified reverse update.
  • Figure 3: Item persistence on Amazon-Clothing. Left: the forget set is much sparser than the retain set in item popularity. Right: even retraining leaves non-zero Top-20 exposure for target items.
  • Figure 4: Item modality imbalance. Lower-left / upper-right: cross-modal alignment in MGCN / MIG-GT, respectively. Off-diagonal similarities remain weak in both backbones.
  • Figure 5: Layer sensitivity mismatch. MMRecUn over-shifts early item embedding modules relative to retraining, while TRU stays closer to the retraining profile.
  • ...and 4 more figures