Aligning Diffusion Models with Noise-Conditioned Perception

Alexander Gambashidze, Anton Kulikov, Yuriy Sosnin, Ilya Makarov

TL;DR

This work tackles the inefficiency of aligning text-to-image diffusion models with human preferences when training in pixel or VAE latent space. It introduces Noise-Conditioned Perceptual Preference Optimization (NCPPO), which operates in the U-Net encoder's embedding space and can be combined with Direct Preference Optimization (DPO), Contrastive Preference Optimization (CPO), and supervised fine-tuning (SFT). The authors also refine the Pick-a-Pic dataset to reduce contradictory preference pairs, achieving faster convergence and better preference metrics on Stable Diffusion 1.5 and SDXL. The results show significant improvements in training efficiency and perceptual image quality, making preference alignment more practical on consumer hardware and in real-world deployment.

Abstract

Recent advancements in human preference optimization, initially developed for Language Models (LMs), have shown promise for text-to-image Diffusion Models, enhancing prompt alignment, visual appeal, and user preference. Unlike LMs, Diffusion Models typically optimize in pixel or VAE space, which does not align well with human perception, leading to slower and less efficient training during the preference alignment stage. We propose using a perceptual objective in the U-Net embedding space of the diffusion model to address these issues. Our approach involves fine-tuning Stable Diffusion 1.5 and XL using Direct Preference Optimization (DPO), Contrastive Preference Optimization (CPO), and supervised fine-tuning (SFT) within this embedding space. This method significantly outperforms standard latent-space implementations across various metrics, including quality and computational cost. For SDXL, our approach provides 60.8% general preference, 62.2% visual appeal, and 52.1% prompt following against the original open-sourced SDXL-DPO on the PartiPrompts dataset, while significantly reducing compute. Our approach not only improves the efficiency and quality of human preference alignment for diffusion models but is also easily integrable with other optimization techniques. The training code and LoRA weights will be available here: https://huggingface.co/alexgambashidze/SDXL_NCP-DPO_v0.1
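
To make the core idea concrete, here is a minimal sketch of a DPO loss computed in a noise-conditioned embedding space rather than in raw latent space. The `encoder` callable (standing in for the frozen U-Net's down/mid blocks), the 4-D feature-map shape, and the `beta` value are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def perceptual_err(encoder, pred, target, t, cond):
    """Squared error in the noise-conditioned embedding space instead of
    raw latent/pixel space. `encoder` is assumed frozen (e.g. the
    reference U-Net's down/mid blocks) and to return a 4-D feature map."""
    with torch.no_grad():
        feat_target = encoder(target, t, cond)
    feat_pred = encoder(pred, t, cond)  # gradients flow back through `pred`
    return F.mse_loss(feat_pred, feat_target, reduction="none").mean(dim=(1, 2, 3))

def ncp_dpo_loss(encoder, pred_w, pred_l, ref_pred_w, ref_pred_l,
                 target_w, target_l, t, cond, beta=2500.0):
    """Diffusion-DPO objective (Wallace et al.) with the latent-space MSE
    swapped for the perceptual error above. `_w`/`_l` denote the preferred
    (winning) and rejected (losing) samples of a preference pair."""
    err_w = perceptual_err(encoder, pred_w, target_w, t, cond)
    err_l = perceptual_err(encoder, pred_l, target_l, t, cond)
    ref_err_w = perceptual_err(encoder, ref_pred_w, target_w, t, cond)
    ref_err_l = perceptual_err(encoder, ref_pred_l, target_l, t, cond)
    # Reward the winner relative to the frozen reference model.
    inside = -beta * ((err_w - ref_err_w) - (err_l - ref_err_l))
    return -F.logsigmoid(inside).mean()
```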

Paper Structure

This paper contains 15 sections, 18 equations, 5 figures, and 2 tables.

Figures (5)

  • Figure 1: Noise-Conditioned Perceptual objective for aligning diffusion models significantly improves Direct Preference Optimization.
  • Figure 2: Our method adapts to human preferences much better than baseline latent/pixel-space implementations and achieves markedly stronger visual-appeal alignment.
  • Figure 3: Overall NCPPO pipeline. We optimize preferences inside a Noise-Conditioned embedding space.
  • Figure 4: We evaluate training speed using PickScore on the Pick-a-Pic validation set; a scoring sketch follows this list. Our method significantly accelerates learning compared to baselines such as DPO and supervised fine-tuning while also achieving superior quality. Contrastive Preference Optimization is very unstable due to the lack of a reference model, but our method provides a regularizing effect as well.
  • Figure 5: Side-by-side comparison of real human preferences for different SDXL models on the PartiPrompts benchmark. We compare NCP-DPO with (1) our own DPO-SDXL, (2) the originally published DPO-SDXL, and (3) the baseline model with no preference optimization. Our method significantly improves on Direct Preference Optimization. All models are trained on the same data.
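
For reference, the training-speed curves in Figure 4 use PickScore as the metric. The snippet below follows the publicly documented usage of the PickScore_v1 reward model on the Hugging Face Hub; treat it as a plausible evaluation sketch, not the authors' exact script.

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
# Processor comes from the CLIP-H backbone; weights from the PickScore release.
processor = AutoProcessor.from_pretrained("laion/CLIP-ViT-H-14-laion2B-s32B-b79K")
model = AutoModel.from_pretrained("yuvalkirstain/PickScore_v1").eval().to(device)

@torch.no_grad()
def pickscore(prompt: str, image: Image.Image) -> float:
    """Score one (prompt, image) pair: scaled cosine similarity between
    the normalized text and image embeddings."""
    image_inputs = processor(images=image, return_tensors="pt").to(device)
    text_inputs = processor(text=prompt, padding=True, truncation=True,
                            max_length=77, return_tensors="pt").to(device)
    img_emb = model.get_image_features(**image_inputs)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = model.get_text_features(**text_inputs)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    return (model.logit_scale.exp() * (txt_emb @ img_emb.T)).item()
```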