Drift-AR: Single-Step Visual Autoregressive Generation via Anti-Symmetric Drifting

Zhen Zou, Xiaoxiao Ma, Mingde Yao, Jie Huang, LinJiang Huang, Feng Zhao

Abstract

Autoregressive (AR)-Diffusion hybrid paradigms combine AR's structured semantic modeling with diffusion's high-fidelity synthesis, yet suffer from a dual speed bottleneck: the sequential AR stage and the iterative multi-step denoising of the diffusion vision decoding stage. Existing methods address each in isolation, without a unified design principle. We observe that the per-position \emph{prediction entropy} of continuous-space AR models naturally encodes spatially varying generation uncertainty, which simultaneously governs draft prediction quality in the AR stage and reflects the corrective effort required by the vision decoding stage -- a connection not fully explored before. Since entropy is inherently tied to both bottlenecks, it serves as a natural unifying signal for joint acceleration. In this work, we propose \textbf{Drift-AR}, which leverages this entropy signal to accelerate both stages: 1) for AR acceleration, we introduce Entropy-Informed Speculative Decoding, which aligns draft--target entropy distributions via a causal-normalized entropy loss, resolving the entropy mismatch that causes excessive draft rejection; 2) for visual decoder acceleration, we reinterpret entropy as the \emph{physical variance} of the initial state for an anti-symmetric drifting field -- high-entropy positions activate stronger drift toward the data manifold while low-entropy positions yield vanishing drift -- enabling single-step (1-NFE) decoding without iterative denoising or distillation. Moreover, both stages share the same entropy signal, which is computed once at no extra cost. Experiments on MAR, TransDiff, and NextStep-1 demonstrate a 3.8--5.5$\times$ speedup with genuine 1-NFE decoding, matching or surpassing original quality. Code will be available at https://github.com/aSleepyTree/Drift-AR.
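The core mechanism described above can be illustrated with a minimal sketch: per-position entropy is read off the AR model's Gaussian prediction head, reinterpreted as the variance of the decoder's initial state, and a single application of a drift field maps that state to the output. The function names, the entropy-to-variance mapping, and the `drift_fn` interface below are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def entropy_from_gaussian_head(log_var):
    # Differential entropy of a diagonal Gaussian per position (assumed AR head):
    # H = 0.5 * (d * log(2*pi*e) + sum_c log_var_c), summed over channels.
    d = log_var.shape[-1]
    return 0.5 * (d * np.log(2 * np.pi * np.e) + log_var.sum(axis=-1))

def one_step_drift_decode(z_ar, entropy, drift_fn, rng=None):
    # Entropy-parameterized prior: higher-entropy positions get a larger
    # initial-state variance (the sqrt mapping here is a hypothetical choice).
    rng = rng or np.random.default_rng()
    sigma = np.sqrt(np.maximum(entropy, 0.0))[..., None]
    x0 = z_ar + sigma * rng.standard_normal(z_ar.shape)
    # Single evaluation of the drift field (1-NFE): at equilibrium the
    # anti-symmetric field vanishes, so low-entropy positions are left
    # nearly untouched while high-entropy positions are corrected.
    return x0 + drift_fn(x0, entropy)
```

With zero entropy the prior collapses onto the AR prediction and a vanishing drift field returns it unchanged, matching the "low-entropy positions yield vanishing drift" behavior in the abstract.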

Paper Structure

This paper contains 17 sections, 10 equations, 4 figures, 4 tables.

Figures (4)

  • Figure 1: Qualitative generation comparison between the vanilla NextStep-1 team2025nextstep and our method on GenEval ghosh2023geneval.
  • Figure 2: Entropy as a diagnostic signal for AR-Diffusion hybrids. (a) Vision AR entropy: the draft model (red) concentrates at low entropy while the target model (blue) spans higher values, revealing severe entropy mismatch. (b) Language AR entropy: large and small models overlap substantially, explaining why speculative decoding succeeds in LLMs but cannot be directly applied to vision AR models. (c) Per-position AR prediction error $\|z_{AR}^{(r)}{-}x_{gt}^{(r)}\|$ vs. entropy $\mathcal{E}^{(r)}$ (Pearson $r{=}0.64$): higher entropy correlates with larger prediction error, confirming that entropy encodes local generation difficulty. (d) Binned analysis: mean AR error increases monotonically with entropy, motivating entropy-parameterized variance for the drifting decoder.
  • Figure 3: Illustration of the proposed Drift-AR framework. (Left) Entropy-informed speculative decoding alleviates the entropy mismatch between the draft and target AR models, providing entropy-aligned semantic guidance that drives the draft AR model to learn diverse, uncertainty-aware feature predictions rather than collapsing to overconfident modes. (Right) The visual decoder learns an anti-symmetric drifting field $V_\theta$ guided by an Entropy-Parameterized Prior over the pushforward distribution $q$; training evolves $q$ toward the data distribution $p$, so at equilibrium the drift vanishes and single-step (1-NFE) generation is achieved. The overall Drift-AR framework is trained end-to-end, coupling entropy-guided AR with drifting-field vision decoder optimization.
  • Figure 4: Visual comparisons with NextStep-1 team2025nextstep on MJHQ-30K li2024playground.
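The diagnostic in Figure 2(c,d) — Pearson correlation between per-position entropy and AR prediction error, plus bin-wise mean error — can be sketched as follows. This is a generic reconstruction of that style of analysis, not the authors' code; the function name and the equal-width binning scheme are assumptions.

```python
import numpy as np

def entropy_error_diagnostic(entropy, ar_error, n_bins=8):
    """Correlate per-position entropy with AR prediction error.

    entropy:  (N,) per-position prediction entropies
    ar_error: (N,) per-position errors ||z_AR - x_gt||
    Returns the Pearson correlation and per-bin mean errors.
    """
    # Pearson r between entropy and error (Figure 2(c)-style statistic).
    r = np.corrcoef(entropy, ar_error)[0, 1]
    # Equal-width entropy bins; mean error per bin (Figure 2(d)-style curve).
    edges = np.linspace(entropy.min(), entropy.max(), n_bins + 1)
    idx = np.clip(np.digitize(entropy, edges) - 1, 0, n_bins - 1)
    binned = np.array([
        ar_error[idx == b].mean() if np.any(idx == b) else np.nan
        for b in range(n_bins)
    ])
    return r, binned
```

On synthetic data where error grows with entropy, the correlation is strongly positive and the binned means increase across bins, mirroring the monotone trend the caption reports.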