Language-Free Generative Editing from One Visual Example

Omar Elezabi, Eduard Zamfir, Zongwei Wu, Radu Timofte

Abstract

Text-guided diffusion models have advanced image editing by enabling intuitive control through language. However, despite their strong capabilities, we find that state-of-the-art methods surprisingly struggle with simple, everyday transformations such as rain or blur. We attribute this limitation to weak and inconsistent textual supervision during training, which leads to poor alignment between language and vision. Existing solutions often rely on additional fine-tuning or stronger text conditioning, but suffer from high data and computational requirements. We argue that diffusion-based editing capabilities are not lost, but merely hidden behind text. The door to cost-efficient visual editing remains open, and the key lies in a vision-centric paradigm that perceives and reasons about visual change as humans do, beyond words. Inspired by this, we introduce Visual Diffusion Conditioning (VDC), a training-free framework that learns conditioning signals directly from visual examples for precise, language-free image editing. Given a paired example (one image with the target effect and one without), VDC derives a visual condition that captures the transformation and steers generation through a novel condition-steering mechanism. An accompanying inversion-correction step mitigates reconstruction errors during DDIM inversion, preserving fine detail and realism. Across diverse tasks, VDC outperforms both training-free and fully fine-tuned text-based editing methods. The code and models are open-sourced at https://omaralezaby.github.io/vdc/

Paper Structure

This paper contains 28 sections, 6 equations, 20 figures, 9 tables, and 2 algorithms.

Figures (20)

  • Figure 1: Text–image misalignment in diffusion latent space. Text-guided generative models rely on language, which often fails to capture appearance-level transformations, e.g. rain, leading to semantic but visually misaligned directions. Our method, Visual Diffusion Conditioning (VDC), instead learns a vision-centric conditioning signal directly from paired visual examples, uncovering the correct transformation direction within the latent space. By steering the diffusion process along this aligned path, VDC achieves faithful and realistic edits, bridging the gap between text semantics and visual representations.
  • Figure 2: Language-Vision misalignment. The internal representations of LDM [rombach2022high] fail to accurately capture the semantics of degradations such as “rain” or “haze”. Attention maps under text-based conditioning remain object-centric and do not correspond to degradation-specific visual attributes. Our VDC framework realigns attention focus toward true visual cues, recovering meaningful features that correspond to rain streaks and hazy regions.
  • Figure 3: Proposed VDC framework. (a) Given a real image, we first invert it through DDIM and apply the learned steering condition $C_t^s$ to guide sampling toward the desired visual feature (e.g., removing rain) while preserving content and quality. (b) A lightweight Condition Generator produces per-step steering embeddings from token indices, representing the target visual feature. These conditions modulate the diffusion outputs through weighted score blending, enabling training-free visual editing without textual prompts (an illustrative sketch of this steered sampling follows the figure list).
  • Figure 4: Visual comparison. Text- and example-based methods struggle with complex edits due to misalignment or degradation priors. Our one-shot VDC (results shown) yields clean edits, with multi-shot and correction modules improving generalization and fidelity.
  • Figure 5: Number of visual examples. Increasing the number of examples improves results, especially for tasks with high variability such as colorization. The inversion correction module further enhances detail preservation and overall output quality.
  • ...and 15 more figures
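
For intuition only, the condition-steering mechanism described in Figure 3 can be pictured as a classifier-free-guidance-style blend of two noise predictions during DDIM sampling: one unconditional, one driven by the per-step steering embedding $C_t^s$. The minimal sketch below assumes a diffusers-style UNet and DDIM scheduler; the names (`steered_sampling`, `cond_null`, `steer_conds`, `weight`) and the exact blending rule are illustrative assumptions, not the paper's implementation.

```python
import torch

@torch.no_grad()
def steered_sampling(unet, scheduler, z_T, cond_null, steer_conds, weight=0.7):
    """Hypothetical sketch of condition-steered DDIM sampling (cf. Figure 3).

    z_T:         latent obtained by DDIM inversion of the input image
    cond_null:   an unconditional / null embedding
    steer_conds: dict mapping timestep -> per-step steering embedding C_t^s
    weight:      blending strength between the two noise predictions
    """
    z = z_T
    for t in scheduler.timesteps:
        # Noise prediction without steering (content-preserving branch).
        eps_base = unet(z, t, encoder_hidden_states=cond_null).sample
        # Noise prediction under the learned per-step steering condition.
        eps_steer = unet(z, t, encoder_hidden_states=steer_conds[int(t)]).sample
        # Weighted score blending: mix the two predictions so sampling moves
        # toward the target visual effect while preserving image content.
        eps = (1.0 - weight) * eps_base + weight * eps_steer
        z = scheduler.step(eps, t, z).prev_sample
    return z
```

In this reading, `weight` plays the role of a guidance scale: at 0 the sample simply reconstructs the inverted image, while larger values push it further along the visual direction encoded by the steering embeddings. How VDC actually schedules and applies $C_t^s$ per step may differ from this simplified blend.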