Varying Manifolds in Diffusion: From Time-varying Geometries to Visual Saliency
Junhao Chen, Manyi Li, Zherong Pan, Xifeng Gao, Changhe Tu
TL;DR
This work analyzes diffusion models through the lens of time-varying data manifolds $\{M_t\}$ and introduces the generation rate $r_t(X_t,v)$ and generation curve $c(X_T,v)$ to quantify local geometric deformation around image components. By deriving an efficient, differentiable estimator based on the latent mapper $h_t$, the authors enable curve shaping and matching, which supports a unified set of image editing tasks, including semantic transfer, object removal, saliency manipulation, and image blending, often outperforming state-of-the-art baselines. They show a strong correlation between generation-curve fluctuations and visual saliency, justifying curve-guided editing and enabling targeted manipulations with a single unconditional diffusion model. Practical limitations include the need for first-order differentiation and convergence speeds that vary across objects, which affect runtime (roughly 10 minutes for 300 iterations on a single GPU) and editing fidelity in complex scenes.
Abstract
Deep generative models learn the data distribution, which is concentrated on a low-dimensional manifold. The geometric analysis of distribution transformation provides a better understanding of data structure and enables a variety of applications. In this paper, we study the geometric properties of the diffusion model, whose forward diffusion process and reverse generation process construct a series of distributions on manifolds that vary over time. Our key contribution is the introduction of the generation rate, which corresponds to the local deformation of the manifold over time around an image component. We show that the generation rate is highly correlated with intuitive visual properties of the image component, such as visual saliency. Further, we propose an efficient and differentiable scheme to estimate the generation rate for a given image component over time, giving rise to a generation curve. The differentiable nature of our scheme allows us to control the shape of the generation curve via optimization. Using different loss functions, our generation curve matching algorithm provides a unified framework for a range of image manipulation tasks, including semantic transfer, object removal, saliency manipulation, image blending, etc. We conduct comprehensive analytical evaluations to support our findings and evaluate our framework on various manipulation tasks. The results show that our method consistently leads to better manipulation results, compared to recent baselines.
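The curve-matching idea in the abstract can be illustrated with a toy sketch. This is not the paper's actual estimator (which is built on the latent mapper $h_t$ of a diffusion model): here a "generation curve" is simply modeled as a differentiable function of a few editable parameters, and gradient descent on an MSE loss reshapes it toward a target curve. The basis functions, target, and step size are all illustrative assumptions.

```python
import numpy as np

# Toy stand-in for curve matching (NOT the paper's implementation):
# a "generation curve" over T timesteps, differentiable in parameters x,
# is optimized to match a target curve via MSE loss.

T = 50
t = np.linspace(0.0, 1.0, T)

# Two fixed, hypothetical basis functions standing in for curve features.
basis = np.stack([np.exp(-5.0 * (t - 0.5) ** 2), t])  # shape (2, T)

def generation_curve(x):
    """Toy differentiable curve: linear in the parameters x."""
    return x @ basis

def loss_and_grad(x, target):
    residual = generation_curve(x) - target
    loss = np.mean(residual ** 2)
    grad = 2.0 * (basis @ residual) / T  # analytic gradient of the MSE
    return loss, grad

# Target curve, e.g. a flattened curve to suppress a salient component.
target = generation_curve(np.array([0.3, 0.1]))

x = np.array([1.0, -0.5])          # initial parameters
init_loss, _ = loss_and_grad(x, target)
for _ in range(300):               # 300 iterations, echoing the TL;DR
    _, g = loss_and_grad(x, target)
    x -= 1.0 * g                   # plain gradient step

final_loss, _ = loss_and_grad(x, target)
print(final_loss < init_loss)
```

In the paper, different loss functions on the curve (rather than the single MSE used here) are what unify tasks such as object removal and saliency manipulation.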
