Rethinking and Red-Teaming Protective Perturbation in Personalized Diffusion Models

Yixin Liu, Ruoxi Chen, Xun Chen, Lichao Sun

TL;DR

The paper addresses how minor protective perturbations enable shortcut learning in personalized diffusion models by inducing latent-space image–prompt misalignment in CLIP, creating a spurious link between a unique identifier $\mathcal{V}^*$ and the injected noise $\Delta$. It proposes a systematic red-teaming framework that combines efficient image restoration-based purification with Contrastive Decoupling Learning (CDL) using a dedicated noise token $\mathcal{V}_N^*$ to decouple learning of the personalized concept from noise patterns, all grounded in a Structural Causal Model. The authors demonstrate superior effectiveness, efficiency, and faithfulness over existing purification methods across seven protections and show resilience to adaptive perturbations, while also highlighting a robustness–efficiency trade-off between purification backbones. This framework enables practical red-teaming of existing PDM protections and offers a generalizable approach to mitigating shortcut learning in other generative-security contexts.

Abstract

Personalized diffusion models (PDMs) have become prominent for adapting pre-trained text-to-image models to generate images of specific subjects using minimal training data. However, PDMs are susceptible to minor adversarial perturbations, leading to significant degradation when fine-tuned on corrupted datasets. These vulnerabilities are exploited to create protective perturbations that prevent unauthorized image generation. Existing purification methods attempt to red-team the protective perturbation to break the protection but often over-purify images, resulting in information loss. In this work, we conduct an in-depth analysis of the fine-tuning process of PDMs through the lens of shortcut learning. We hypothesize and empirically demonstrate that adversarial perturbations induce a latent-space misalignment between images and their text prompts in the CLIP embedding space. This misalignment causes the model to erroneously associate noisy patterns with unique identifiers during fine-tuning, resulting in poor generalization. Based on these insights, we propose a systematic red-teaming framework that includes data purification and contrastive decoupling learning. We first employ off-the-shelf image restoration techniques to realign images with their original semantic content in latent space. Then, we introduce contrastive decoupling learning with noise tokens to decouple the learning of personalized concepts from spurious noise patterns. Our study not only uncovers shortcut learning vulnerabilities in PDMs but also provides a thorough evaluation framework for developing stronger protection. Our extensive evaluation demonstrates its advantages over existing purification methods and its robustness against adaptive perturbations.
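To make the two-stage recipe concrete, the sketch below shows how purification and contrastive decoupling learning (CDL) could slot into a standard `diffusers` DreamBooth-style fine-tuning step. This is a minimal illustration under assumptions, not the authors' released implementation: `restore` stands in for any off-the-shelf image-restoration model, and the token strings (`sks` for the identifier, a made-up noise token) and prompt templates are hypothetical placeholders.

```python
# Minimal sketch of the red-teaming pipeline: purification + CDL.
# Assumes a `diffusers`-style unet/vae/text_encoder/scheduler; the names
# restore, tokenize, and the token strings are illustrative placeholders.
import torch
import torch.nn.functional as F

ID_TOKEN = "sks"      # unique identifier V* (conventional DreamBooth token)
NOISE_TOKEN = "t@n"   # dedicated noise token V_N* (hypothetical string)

def purify(images, restore):
    """Stage 1: restore images so they realign with their semantics in latent space."""
    return [restore(img) for img in images]

def cdl_step(unet, vae, text_encoder, tokenize, scheduler, pixels, use_noise_token):
    """Stage 2: one contrastive-decoupling fine-tuning step.

    Pairing the identifier with (and, on other steps, without) the noise
    token lets the model attribute residual noise patterns to V_N* instead
    of binding them to V*.
    """
    prompt = (f"a photo of {ID_TOKEN} person with {NOISE_TOKEN} pattern"
              if use_noise_token else f"a photo of {ID_TOKEN} person")
    latents = vae.encode(pixels).latent_dist.sample() * vae.config.scaling_factor
    noise = torch.randn_like(latents)
    t = torch.randint(0, scheduler.config.num_train_timesteps,
                      (latents.shape[0],), device=latents.device)
    noisy_latents = scheduler.add_noise(latents, noise, t)
    cond = text_encoder(tokenize(prompt))[0]          # text embeddings
    pred = unet(noisy_latents, t, encoder_hidden_states=cond).sample
    return F.mse_loss(pred, noise)                    # standard noise-prediction loss

# At sampling time the noise token can be pushed into the negative prompt, e.g.
# pipe(prompt=f"a photo of {ID_TOKEN} person",
#      negative_prompt=f"{NOISE_TOKEN} pattern")
```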

Paper Structure

This paper contains 20 sections, 6 equations, 7 figures, 7 tables, and 1 algorithm.

Figures (7)

  • Figure 1: We observe that protective perturbation for personalized diffusion models creates a latent mismatch in the image–prompt pair. Fine-tuning on such perturbed data tricks the model into learning the wrong concept mapping, so its generations suffer severe quality degradation.
  • Figure 2: Causal view of shortcut learning. (a) Protective perturbations create a spurious shortcut (red arrow) between the identifier $\mathcal{V}^*$ and noise $\Delta$. (b) Our CDL introduces a noise token $\mathcal{V}_N^*$ (orange arrows) to decouple noise from $\mathcal{V}^*$, enabling clean concept learning.
  • Figure 3: Latent 2D visualization and concept classification of images using CLIP encoders (a minimal CLIP-scoring probe of this mismatch is sketched after this list).
  • Figure 4: Visualization of purified images that were originally protected by MetaCloak. Our method shows high faithfulness and high quality, while others fail to effectively purify the perturbation.
  • Figure 5: LIQE quality-score curve for identifier $\mathcal{V}^*$ during training. Our proposed contrastive decoupling learning (CDL) significantly improves quality over training on perturbed data, and when combined with input purification (CodeSR + CDL), the model reaches quality comparable to clean-level training.
  • ...and 2 more figures
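The latent image–prompt mismatch visualized in Figure 3 can be probed directly with CLIP similarity scores. Below is a minimal sketch assuming Hugging Face `transformers` and a stock CLIP checkpoint; the file names and prompt are illustrative placeholders, not the paper's exact setup.

```python
# Probe CLIP image-text alignment: a perturbed image that has been pushed
# away from its prompt in latent space should score lower than its clean twin.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_alignment(image_path: str, prompt: str) -> float:
    """Cosine similarity between the CLIP image and text embeddings."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(text=[prompt], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return (img @ txt.T).item()

# Hypothetical file names; under the paper's hypothesis the second score drops.
print(clip_alignment("clean.png", "a photo of a person"))
print(clip_alignment("perturbed.png", "a photo of a person"))
```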