Generating DDPM-based Samples from Tilted Distributions

Himadri Mandal, Dhruman Gupta, Rushil Gupta, Sarvesh Ravichandran Iyer, Agniv Bandyopadhyay, Achal Bassamboo, Varun Gupta, Sandeep Juneja

Abstract

Given $n$ independent samples from a $d$-dimensional probability distribution, our aim is to generate diffusion-based samples from a distribution obtained by tilting the original, where the degree of tilt is parametrized by $θ\in \mathbb{R}^d$. We define a plug-in estimator and show that it is minimax-optimal. We develop Wasserstein bounds between the distribution of the plug-in estimator and the true distribution as a function of $n$ and $θ$, illustrating regimes where the output and the desired true distribution are close. Further, under some assumptions, we prove the TV-accuracy of running diffusion on these tilted samples. Our theoretical results are supported by extensive simulations. Applications of our work include finance, weather and climate modelling, and many other domains, where the aim may be to generate samples from a tilted distribution that satisfies practically motivated moment constraints.
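The plug-in reweighting idea behind the abstract can be sketched numerically. The snippet below is a minimal illustration (not the authors' implementation): given empirical samples from $P$, it resamples them with self-normalized weights $\propto \exp(\theta \cdot x)$ to approximate the exponentially tilted distribution $P_\theta$. The function name `tilted_resample` is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def tilted_resample(samples, theta, n_out=None):
    """Approximate samples from the exponentially tilted distribution
    P_theta(dx) ∝ exp(theta · x) P(dx), via self-normalized weights
    on the empirical sample (a plug-in reweighting scheme)."""
    logw = samples @ theta
    logw -= logw.max()          # stabilize the exponentials
    w = np.exp(logw)
    w /= w.sum()                # self-normalized importance weights
    n_out = n_out or len(samples)
    idx = rng.choice(len(samples), size=n_out, replace=True, p=w)
    return samples[idx]

# Example: tilt a 2-d standard Gaussian. For N(0, I) the exact tilted
# distribution is N(theta, I), so the resampled mean should shift to theta.
X = rng.standard_normal((50_000, 2))
theta = np.array([1.0, 0.0])
Xt = tilted_resample(X, theta)
```

In practice these reweighted samples would then serve as training data for the diffusion model, which is where the paper's Wasserstein and TV guarantees apply.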


Paper Structure

This paper contains 27 sections, 21 theorems, 108 equations, 4 figures, and 1 algorithm.

Key Result

Proposition 1

Let $\Phi$ be a random variable on a measurable space $(\Omega, \mathcal{F})$. Fix a probability measure $Q$ such that $\mathbb{E}_{Q}[\exp(\Phi)]< \infty$, and for every probability measure $P$ such that $P \ll Q$ and $\mathbb{E}_P[\Phi]<\infty$, consider the functional $\mathcal{J}_{\mathrm{KL}}(P) := \mathbb{E}_P[\Phi] - D_{\mathrm{KL}}(P \,\|\, Q)$, where $D_{\mathrm{KL}}$ denotes the $\mathrm{KL}$ divergence. Then, $\mathcal{J}_{\mathrm{KL}}$ is maximized uniquely at the exponentially tilted measure $P^\star$ given by $\frac{dP^\star}{dQ} = \frac{\exp(\Phi)}{\mathbb{E}_Q[\exp(\Phi)]}$.
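This proposition is an instance of the Gibbs variational principle, which can be verified numerically on a finite space: the tilted measure attains the supremum, and the maximum value equals $\log \mathbb{E}_Q[\exp(\Phi)]$. The sketch below (illustration only, not from the paper) checks both facts for a toy $Q$ and $\Phi$.

```python
import numpy as np

rng = np.random.default_rng(1)

def J_KL(p, q, phi):
    """Gibbs functional J(P) = E_P[Phi] - KL(P || Q) on a finite space."""
    mask = p > 0
    return p @ phi - np.sum(p[mask] * np.log(p[mask] / q[mask]))

q = np.full(5, 0.2)                        # Q: uniform on 5 points
phi = np.array([0.3, -1.0, 2.0, 0.0, 0.5])  # arbitrary Phi values
p_star = q * np.exp(phi)
p_star /= p_star.sum()                     # the exponential tilt of Q by Phi

best = J_KL(p_star, q, phi)
# No randomly drawn P beats the tilted measure...
for _ in range(1000):
    p = rng.dirichlet(np.ones(5))
    assert J_KL(p, q, phi) <= best + 1e-9
# ...and the maximum equals log E_Q[exp(Phi)].
assert np.isclose(best, np.log(q @ np.exp(phi)))
```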

Figures (4)

  • Figure 1: The left plot shows the empirical sliced Wasserstein distance ($\mathcal{W}_2$) between the reweighted estimator and the true tilted distribution. The right plot shows the theoretical bound derived in Theorem \ref{thm:WassWtd} as a function of sample size $N$. As the theoretical bound vanishes, the empirical error decreases correspondingly.
  • Figure 2: Samples generated by tilting a bounded distribution in 50 dimensions by $\vartheta = \theta \cdot (1, \ldots, 1)$, for $\theta = 1.0, 2.0, 2.5$, using reweighted sampling, weighted diffusion, DPS, and LGD-MC. Our method performs as well as the empirical samples, outperforming the guidance methods.
  • Figure 3: DDPM samples from $P$. Daily temperature fields (May–June, India, $5^\circ\!\times\!5^\circ$).
  • Figure 4: DDPM samples from $P_\theta$. Reweighted training targets the hotter, rarer slice with $\mathbb{E}_{P_\theta}[g]=\mathbb{E}_P[g]+1$.

Theorems & Definitions (32)

  • Proposition 1: KL $\Rightarrow$ exponential tilt
  • Theorem 1: Asymptotic Minimaxity
  • Proposition 2: Fournier & Guillin (2013) rate of convergence in Wasserstein distance, applied to $\mu_\theta$
  • Theorem 2
  • Theorem 3
  • Lemma 1
  • Corollary 1
  • Proposition 3
  • Corollary 2
  • Corollary 3
  • ...and 22 more