On-the-fly Repulsion in the Contextual Space for Rich Diversity in Diffusion Transformers

Omer Dahary, Benaya Koren, Daniel Garibi, Daniel Cohen-Or

Abstract

Modern Text-to-Image (T2I) diffusion models have achieved remarkable semantic alignment, yet they often suffer from a significant lack of variety, converging on a narrow set of visual solutions for any given prompt. This typicality bias presents a challenge for creative applications that require a wide range of generative outcomes. We identify a fundamental trade-off in current approaches to diversity: modifying model inputs requires costly optimization to incorporate feedback from the generative path. In contrast, acting on spatially-committed intermediate latents tends to disrupt the forming visual structure, leading to artifacts. In this work, we propose to apply repulsion in the Contextual Space as a novel framework for achieving rich diversity in Diffusion Transformers. By intervening in the multimodal attention channels, we apply on-the-fly repulsion during the transformer's forward pass, injecting the intervention between blocks where text conditioning is enriched with emergent image structure. This allows for redirecting the guidance trajectory after it is structurally informed but before the composition is fixed. Our results demonstrate that repulsion in the Contextual Space produces significantly richer diversity without sacrificing visual fidelity or semantic adherence. Furthermore, our method is uniquely efficient, imposing a small computational overhead while remaining effective even in modern "Turbo" and distilled models where traditional trajectory-based interventions typically fail.
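
To make the mechanism concrete, below is a minimal sketch (not the authors' code) of on-the-fly repulsion applied to the text ("contextual") stream between two DiT blocks. The function name, the `strength` parameter, and the RBF-kernel repulsion rule are illustrative assumptions; the abstract specifies only that repulsion acts on the multimodal-attention text channels between blocks.

```python
import torch


def contextual_repulsion(ctx: torch.Tensor, strength: float = 0.1) -> torch.Tensor:
    """Repel each sample's text-stream features from the other batch samples'.

    ctx: [B, T, D] text-token hidden states for B samples sharing one prompt.
    Returns a repelled tensor of the same shape. The RBF-kernel force below is
    an assumed instantiation, not necessarily the paper's exact rule.
    """
    B = ctx.shape[0]
    flat = ctx.reshape(B, -1)                     # [B, T*D]
    diff = flat.unsqueeze(1) - flat.unsqueeze(0)  # [B, B, T*D], diff[i, j] = x_i - x_j
    sq_dist = diff.pow(2).sum(-1)                 # [B, B] pairwise squared distances
    # Median bandwidth heuristic, common in kernel-based repulsion (e.g. SVGD).
    h = sq_dist.median() / torch.log(torch.tensor(float(B))).clamp_min(1e-6)
    kernel = torch.exp(-sq_dist / (h + 1e-8))     # RBF similarity between samples
    # Kernel-weighted sum of (x_i - x_j) pushes each sample away from its neighbors.
    force = (kernel.unsqueeze(-1) * diff).sum(dim=1)
    return (flat + strength * force).reshape_as(ctx)
```

In a dual-stream DiT sampler, such a hook would be called between blocks, e.g. `txt = contextual_repulsion(txt)` after `txt, img = block(txt, img, ...)`, leaving the image stream untouched; which blocks to intervene between is the paper's design choice and is not restated here.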

Figures (18)

  • Figure 1: Conceptual comparison of diversity strategies in dual-stream DiT architectures. Here $p^{(i)}$ denotes the prompt embedding for sample $i$, $z_t^{(i)}$ denotes the latent at timestep $t$ for sample $i$, and the red double-arrow icon indicates the point of diversity manipulation. (a) Upstream: Interventions on noise or prompt embeddings lack structural feedback from the emerging image. (b) Downstream: Repulsion in image latents acts on a fixed visual mode and can push samples off the data manifold, causing artifacts. (c) Ours: By applying on-the-fly repulsion within the Contextual Space (text-attention channels), we steer the model’s generative intent. This allows for a semantically driven intervention synchronized with the emergent visual structure.
  • Figure 2: Comparison of interpolation and extrapolation between the internal representations of two images. Intermediate frames are generated by denoising the source image while linearly blending its internal features with those of the target; extrapolation extends the blending direction beyond the endpoints (see the sketch after this list). While interpolation in the Latent Space leads to structural blurring and artifacts due to spatial misalignment, the Contextual Space maintains high visual fidelity. This demonstrates that the Contextual Space enables smooth semantic transitions by decoupling generative intent from fixed spatial structures.
  • Figure 3: Qualitative results. For each prompt, we compare the base model results (top) to our results (bottom).
  • Figure 4: Integration with image editing models. We demonstrate that our method can be successfully integrated into Flux-Kontext to generate high-quality, diverse results.
  • Figure 5: Quantitative evaluation. Pareto frontiers comparing our method against baseline methods using Flux-dev. We evaluate the trade-off between semantic diversity (Vendi Score) and three performance axes: (Left) Human Preference [ImageReward $\uparrow$], (Middle) Prompt Alignment [VQAScore $\uparrow$], and (Right) Distributional Fidelity [KID $\downarrow$]. Our method (red) achieves a superior frontier across all metrics.
  • ...and 13 more figures
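
As referenced in the Figure 2 caption, the following is a minimal sketch of the linear blending used for interpolation and extrapolation, assuming internal features are plain tensors; the names `blend_features`, `src_feat`, `tgt_feat`, and `alpha` are our own illustrative choices.

```python
import torch


def blend_features(src_feat: torch.Tensor, tgt_feat: torch.Tensor,
                   alpha: float) -> torch.Tensor:
    """Linearly blend two internal feature tensors of the same shape.

    alpha in [0, 1] interpolates from source to target; alpha > 1 extrapolates
    the same direction beyond the target, as in Figure 2.
    """
    return src_feat + alpha * (tgt_feat - src_feat)
```

Per the caption, applying this blend to spatial latents mixes misaligned structures (hence the blurring), whereas applying it to the contextual stream shifts generative intent without disturbing the spatial layout.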