
Gated Condition Injection without Multimodal Attention: Towards Controllable Linear-Attention Transformers

Yuhe Liu, Zhenxiong Tan, Yujia Hu, Songhua Liu, Xinchao Wang

Abstract

Recent advances in diffusion-based controllable visual generation have led to remarkable improvements in image quality. However, these powerful models are typically deployed on cloud servers due to their large computational demands, raising serious concerns about user data privacy. To enable secure and efficient on-device generation, we explore in this paper controllable diffusion models built upon linear attention architectures, which offer superior scalability and efficiency, even on edge devices. Yet, our experiments reveal that existing controllable generation frameworks, such as ControlNet and OminiControl, either lack the flexibility to support multiple heterogeneous condition types or suffer from slow convergence on such linear-attention models. To address these limitations, we propose a novel controllable diffusion framework tailored for linear attention backbones like SANA. The core of our method lies in a unified gated conditioning module working in a dual-path pipeline, which effectively integrates multi-type conditional inputs, such as spatially aligned and non-aligned cues. Extensive experiments on multiple tasks and benchmarks demonstrate that our approach achieves state-of-the-art controllable generation performance based on linear-attention models, surpassing existing methods in terms of fidelity and controllability.
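To make the core idea concrete, below is a minimal numpy sketch of gated condition injection on top of a linear-attention block, in the spirit described above. This is not the authors' implementation: the sigmoid gate, the ReLU feature map, and the names `w_gate` and `w_proj` are all illustrative assumptions, and it covers only the spatially aligned (token-aligned) case.

```python
import numpy as np

def linear_attention(q, k, v, eps=1e-6):
    """Linear attention: phi(Q) (phi(K)^T V) with a positive feature map.
    A ReLU feature map is used here for illustration; cost is linear in
    sequence length because K^T V is a (d x d) summary."""
    q = np.maximum(q, 0) + eps
    k = np.maximum(k, 0) + eps
    kv = k.T @ v                                  # (d, d) summary
    z = q @ k.sum(axis=0, keepdims=True).T        # per-token normalizer
    return (q @ kv) / z

def gated_condition_injection(x_tokens, cond_tokens, w_gate, w_proj):
    """Inject condition features into the latent token stream through a
    learned gate instead of multimodal attention. All weights are
    illustrative stand-ins for learned parameters."""
    cond = cond_tokens @ w_proj                          # project condition into latent dim
    gate = 1.0 / (1.0 + np.exp(-(x_tokens @ w_gate)))    # sigmoid gate per token/channel
    return x_tokens + gate * cond                        # gated residual injection

# Hypothetical usage: gate the condition in, then run linear attention.
rng = np.random.default_rng(0)
x = rng.normal(size=(16, 8))      # noisy latent tokens
c = rng.normal(size=(16, 8))      # spatially aligned condition tokens
w_gate = rng.normal(size=(8, 8))
w_proj = rng.normal(size=(8, 8))
h = gated_condition_injection(x, c, w_gate, w_proj)
out = linear_attention(h, h, h)
```

Because the condition enters through an explicit gate on the residual stream rather than through attention alone, the conditional signal reaches the latent tokens directly from the first training steps, which is consistent with the faster convergence the paper reports.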


Paper Structure

This paper contains 24 sections, 8 equations, 10 figures, 4 tables.

Figures (10)

  • Figure 1: Gated control on the original OminiControl. Our approach enables the model to capture control signals much earlier during training, even when using softmax attention.
  • Figure 2: Convergence behavior. The introduction of gated modulation results in a substantially steeper decline in training loss compared with interaction mechanisms that rely solely on attention, indicating that the model learns conditional information more rapidly and effectively, ultimately reaching a lower loss. This trend is also reflected in the CLIP-Image scores, where the gated variant consistently outperforms the baseline from the earliest training stage and maintains a clear lead throughout.
  • Figure 2: Robustness to sampling steps and guidance scale. Our model produces better and more stable outputs than OminiControl under both low-step inference and varying guidance scales.
  • Figure 3: Exploration of different methods and comparison of results. We adopt a shared module to handle the noisy latent and image condition, thus maximally retaining the original information flow of the model. The internal interaction enables flexible and general conditional control. Furthermore, a unified gating mechanism allows both non-spatial and spatial information to be effectively injected, improving performance while greatly accelerating the convergence of spatial tasks.
  • Figure 3: Further visual comparisons on the subject-driven tasks. Our approach preserves object-specific features with greater fidelity, while simultaneously adapting the environment according to the provided editing prompt in a natural and flexible manner.
  • ...and 5 more figures