
Why Instruction-Based Unlearning Fails in Diffusion Models?

Zeliang Zhang, Rui Sun, Jiani Liu, Qi Wu, Chenliang Xu

Abstract

Instruction-based unlearning has proven effective for modifying the behavior of large language models at inference time, but whether this paradigm extends to other generative models remains unclear. In this work, we investigate instruction-based unlearning in diffusion-based image generation models and show, through controlled experiments across multiple concepts and prompt variants, that diffusion models systematically fail to suppress targeted concepts when guided solely by natural-language unlearning instructions. By analyzing both the CLIP text encoder and cross-attention dynamics during the denoising process, we find that unlearning instructions do not induce sustained reductions in attention to the targeted concept tokens, causing the targeted concept representations to persist throughout generation. These results reveal a fundamental limitation of prompt-level instruction in diffusion models and suggest that effective unlearning requires interventions beyond inference-time language control.
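The figure captions below refer to a "$\Delta$ similarity (unlearn $-$ baseline)" measurement. As a minimal sketch of what such a metric could look like, the snippet below computes the change in cosine similarity between a prompt embedding and a targeted-concept embedding when an unlearning instruction is added. The function names and the toy random embeddings are illustrative assumptions, not the paper's actual implementation (which uses real CLIP text embeddings).

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def delta_similarity(unlearn_emb, baseline_emb, concept_emb):
    """Delta similarity = sim(unlearn prompt, concept) - sim(baseline prompt, concept).

    A value near zero means the unlearning instruction barely moves the
    prompt representation away from the targeted concept.
    """
    return cosine_sim(unlearn_emb, concept_emb) - cosine_sim(baseline_emb, concept_emb)

# Toy example with random 512-d "CLIP-like" embeddings (hypothetical data).
rng = np.random.default_rng(0)
concept = rng.normal(size=512)
baseline = concept + 0.1 * rng.normal(size=512)  # baseline prompt close to concept
unlearn = concept + 0.1 * rng.normal(size=512)   # instruction barely shifts it
print(delta_similarity(unlearn, baseline, concept))
```

In the paper's setting, a near-zero delta would indicate that the natural-language instruction fails to suppress the concept in the text encoder's representation.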


Paper Structure

This paper contains 10 sections, 3 equations, and 15 figures.

Figures (15)

  • Figure 1: Motivating experiment evaluating instruction-based unlearning in diffusion models. More examples, including experiments on SD-XL and explicit use of unlearning instructions, can be found in the appendix.
  • Figure 5: Explicit use of unlearning instruction lets the model generate the targeted concept.
  • Figure 6: Implicit use of unlearning instruction still pushes the model to generate the targeted concept.
  • Figure : (a) $\Delta$ similarity (unlearn $-$ baseline)
  • Figure : (a) $\Delta$ similarity (unlearn $-$ baseline)
  • ...and 10 more figures