AnyControl: Create Your Artwork with Versatile Control on Text-to-Image Generation

Yanan Sun, Yanchen Liu, Yinhao Tang, Wenjie Pei, Kai Chen

TL;DR

AnyControl tackles controllable text-to-image generation with arbitrary combinations of spatial controls. It introduces a Multi-Control Encoder that progressively fuses and aligns textual and visual signals via alternating fusion and alignment blocks, guided by query tokens, to produce a unified embedding. The approach achieves state-of-the-art performance on multi-control synthesis (COCO-UM) and improves single-control tasks, with better FID, CLIP alignment, and spatial-textual coherence than prior methods. Built on pre-trained CLIP and visual encoders, the framework offers plug-in versatility and integrates seamlessly with style and color controls, making it practical for versatile T2I generation.

Abstract

The field of text-to-image (T2I) generation has made significant progress in recent years, largely driven by advancements in diffusion models. Linguistic control enables effective content creation but struggles with fine-grained control over image generation. This challenge has been explored, to a great extent, by incorporating additional user-supplied spatial conditions, such as depth maps and edge maps, into pre-trained T2I models through extra encoding. However, multi-control image synthesis still faces several challenges. Specifically, current approaches are limited in handling free combinations of diverse input control signals, overlook the complex relationships among multiple spatial conditions, and often fail to maintain semantic alignment with the provided textual prompts. This can lead to suboptimal user experiences. To address these challenges, we propose AnyControl, a multi-control image synthesis framework that supports arbitrary combinations of diverse control signals. AnyControl develops a novel Multi-Control Encoder that extracts a unified multi-modal embedding to guide the generation process. This approach enables a holistic understanding of user inputs and produces high-quality, faithful results under versatile control signals, as demonstrated by extensive quantitative and qualitative evaluations. Our project page is available at https://any-control.github.io.
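
To make the Multi-Control Encoder concrete, below is a minimal PyTorch sketch of the alternating fusion/alignment idea: learnable query tokens repeatedly self-attend together with the visual tokens of all spatial conditions (fusion) and cross-attend to the textual tokens (alignment). The class name, dimensions, block count, and exact attention wiring are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch of the Multi-Control Encoder idea; all names and shapes
# are assumptions, not the official AnyControl code.
import torch
import torch.nn as nn

class MultiControlEncoderSketch(nn.Module):
    def __init__(self, dim=768, num_queries=256, num_blocks=4, num_heads=8):
        super().__init__()
        # Learnable query tokens that distill compatible information
        # from the textual and visual tokens.
        self.queries = nn.Parameter(torch.randn(1, num_queries, dim))
        self.fusion_blocks = nn.ModuleList(
            [nn.MultiheadAttention(dim, num_heads, batch_first=True)
             for _ in range(num_blocks)]
        )
        self.align_blocks = nn.ModuleList(
            [nn.MultiheadAttention(dim, num_heads, batch_first=True)
             for _ in range(num_blocks)]
        )

    def forward(self, text_tokens, visual_tokens):
        # text_tokens:   (B, Lt, dim), e.g. from a frozen CLIP text encoder
        # visual_tokens: (B, Lv, dim), concatenated tokens of all spatial
        #                conditions, so Lv grows with the number of controls
        q = self.queries.expand(text_tokens.size(0), -1, -1)
        for fuse, align in zip(self.fusion_blocks, self.align_blocks):
            # Fusion: self-attention over [queries; visual tokens] lets the
            # queries aggregate information across all spatial conditions.
            ctx = torch.cat([q, visual_tokens], dim=1)
            fused, _ = fuse(ctx, ctx, ctx)
            q = fused[:, : q.size(1)]
            # Alignment: cross-attention against the textual tokens keeps
            # the distilled embedding consistent with the prompt.
            q, _ = align(q, text_tokens, text_tokens)
        return q  # unified multi-control embedding guiding generation
```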

Paper Structure

This paper contains 19 sections, 3 equations, 17 figures, and 4 tables.

Figures (17)

  • Figure 1: Multi-control image synthesis of AnyControl. Our model supports free combinations of multiple control signals and generates harmonious results that are well-aligned with each input. The input control signals fed into the model are shown in a combined image for better visualization.
  • Figure 2: AnyControl and Multi-Control Encoder. Left shows the overall framework of AnyControl, which develops a Multi-Control Encoder for extracting comprehensive multi-control embeddings from the textual prompt and multiple spatial conditions; the multi-control embeddings are then used to guide the generation process. Right shows the detailed design of the Multi-Control Encoder, driven by alternating multi-control fusion and alignment blocks, with query tokens defined to distill compatible information from the textual tokens and the visual tokens of the spatial conditions.
  • Figure 3: Three types of multi-control methods. Squares in different colors denote different condition types, while dotted squares denote zero tensors. (a) Some methods (e.g., Uni-ControlNet, Cocktail) use an input convolution layer with a fixed number of channels, followed by several convolution blocks, as the Multi-Control Encoder. (b) Other methods (e.g., UniControl, ControlNet, T2I-Adapter) adopt an MoE design: they construct a separate encoder for each type of control signal and obtain the embeddings through a weighted sum. (c) In contrast, AnyControl adopts an attention mechanism to accommodate a varying number and modality of conditions (see the sketch after this list). "SAB" and "CAB" denote self- and cross-attention blocks, respectively.
  • Figure 4: Visualization of aligned and unaligned conditions. The first row shows the aligned case, where pixels at the same location in all control signals describe the same object. The conditions in the second and third rows describe the foreground and background, respectively; together they form a complete image, constituting the unaligned case.
  • Figure 5: Comparison on multi-control image synthesis. Multi-ControlNet adopts the MoE design to process diverse conditions, while Cocktail adopts the composition design, combining multiple conditions of the same type into one.
  • ...and 12 more figures
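
To make the contrast in Figure 3 concrete, here is a hedged sketch of the attention-friendly input handling in (c): each supplied condition map is tokenized by a shared patch embedder and the tokens are concatenated, so no zero tensors (a) or per-modality encoders (b) are needed when the set of controls changes. `ConditionTokenizerSketch` and all shapes are hypothetical, not the authors' code.

```python
# Hedged sketch: turn any number of condition maps into one token sequence.
import torch
import torch.nn as nn

class ConditionTokenizerSketch(nn.Module):
    def __init__(self, dim=768, patch=16):
        super().__init__()
        # A single shared patch embedding; per-modality type embeddings
        # could be added on top.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)

    def forward(self, conditions):
        # conditions: list of (B, 3, H, W) maps -- edge, depth, pose, ...
        # The list length is free, so "missing" condition types need no
        # placeholder zero tensors.
        tokens = [
            self.patch_embed(c).flatten(2).transpose(1, 2)  # (B, Lc, dim)
            for c in conditions
        ]
        return torch.cat(tokens, dim=1)  # (B, sum(Lc), dim)

# Usage: two controls today, three tomorrow -- the same module applies,
# and the output feeds the Multi-Control Encoder sketched earlier.
tokenizer = ConditionTokenizerSketch()
edge = torch.randn(1, 3, 512, 512)
depth = torch.randn(1, 3, 512, 512)
visual_tokens = tokenizer([edge, depth])  # shape (1, 2048, 768)
```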