AlignIT: Enhancing Prompt Alignment in Customization of Text-to-Image Models

Aishwarya Agarwal, Srikrishna Karanam, Balaji Vasan Srinivasan

TL;DR

AlignIT addresses misalignment in customized text-to-image diffusion by focusing on cross-attention keys and values. It introduces a training-free, test-time adaptation that copies concept-specific $K$ and $V$ from a dummy prompt into the original prompt's positions, preserving other tokens, and can be applied on top of existing customization methods. Through extensive evaluation on CustomConcept101, AlignIT improves CLIP-based text and image alignment and garners user preference over baselines, validating its practical impact for fine-grained prompt control. The approach leverages the fact that $K$ and $V$ encode the conceptual content and can be manipulated to maintain prompt fidelity while faithfully rendering the customized concept.
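The injection step is easiest to see in code. Below is a minimal, hypothetical PyTorch sketch of the idea: inside a cross-attention layer, the $K$/$V$ rows at the concept token's positions are overwritten with those computed by the customized model on a dummy prompt, while every other token keeps the baseline keys and values. All function and variable names here are illustrative assumptions, not the authors' implementation.

```python
import torch

def inject_concept_kv(k_base, v_base, k_custom, v_custom,
                      prompt_idx, dummy_idx):
    """Hypothetical sketch of AlignIT-style key/value injection.

    k_base, v_base:     (seq_len, dim) keys/values computed by the
                        baseline (non-customized) model for the full
                        input prompt.
    k_custom, v_custom: (seq_len, dim) keys/values computed by the
                        customized model for a short dummy prompt
                        containing the learned concept token.
    prompt_idx:         positions of the concept token(s) in the
                        input prompt.
    dummy_idx:          positions of the concept token(s) in the
                        dummy prompt.
    """
    k_out, v_out = k_base.clone(), v_base.clone()
    # Overwrite only the concept token rows; every other token keeps
    # the baseline keys/values.
    k_out[prompt_idx] = k_custom[dummy_idx]
    v_out[prompt_idx] = v_custom[dummy_idx]
    return k_out, v_out
```

Because the swap touches only the concept token's rows, attention for all remaining tokens is driven by the unmodified baseline encoding, which is what preserves alignment with the rest of the prompt.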

Abstract

We consider the problem of customizing text-to-image diffusion models with user-supplied reference images. Given new prompts, existing methods can capture the key concept from the reference images but fail to align the generated image with the prompt. In this work, we seek to address this key issue by proposing new methods that can easily be used in conjunction with existing customization methods that optimize the embeddings/weights at various intermediate stages of the text encoding process. The first contribution of this paper is a dissection of the various stages of the text encoding process leading up to the conditioning vector for text-to-image models. We take a holistic view of existing customization methods and notice that the key and value outputs of this process differ substantially from those of the corresponding baseline (non-customized) models (e.g., baseline Stable Diffusion). While this difference does not impact the concept being customized, it leads to other parts of the generated image not being aligned with the prompt. Further, we observe that these keys and values allow independent control over various aspects of the final generation, enabling semantic manipulation of the output. Taken together, the features spanning these keys and values serve as the basis for our next contribution, where we fix the aforementioned issues with existing methods. We propose a new post-processing algorithm, AlignIT, that infuses the keys and values for the concept of interest while ensuring the keys and values for all other tokens in the input prompt remain unchanged. Our proposed method can be plugged directly into existing customization methods, leading to a substantial improvement in the alignment of the final result with the input prompt while retaining the customization quality.
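For context on where these keys and values arise, the sketch below writes out a single-head cross-attention layer of a text-to-image diffusion model: $K$ and $V$ are linear projections of the per-token text conditioning, so any customization that perturbs the conditioning for tokens beyond the concept token perturbs their keys and values as well. This is a generic illustration under standard attention conventions and assumed variable names, not code from the paper.

```python
import torch
import torch.nn.functional as F

def cross_attention(x, cond, w_q, w_k, w_v):
    """Minimal single-head cross-attention.

    x:    (n_pix, d)   image latent features (queries)
    cond: (n_tok, d_c) per-token text conditioning from the encoder
    w_q:  (d, d_h)     query projection
    w_k:  (d_c, d_h)   key projection
    w_v:  (d_c, d_h)   value projection
    """
    q = x @ w_q      # queries come from image features
    k = cond @ w_k   # keys come from the text conditioning
    v = cond @ w_v   # values come from the text conditioning
    # Each spatial location attends over all text tokens and mixes
    # their values; altered K/V for non-concept tokens therefore
    # changes how the rest of the prompt is rendered.
    attn = F.softmax(q @ k.T / k.shape[-1] ** 0.5, dim=-1)
    return attn @ v
```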

Paper Structure

This paper contains 8 sections, 8 figures, 2 tables, and 1 algorithm.

Figures (8)

  • Figure 1: Editability-reconstruction tradeoff in baselines.
  • Figure 2: Control enabled by keys and values in cross-attention layers.
  • Figure 3: Various stages of the text encoding process.
  • Figure 4: Cross-attention maps demonstrating that baselines undesirably alter the keys/values of tokens beyond the concept of interest.
  • Figure 5: Semantic manipulations offered by keys and values.
  • ...and 3 more figures