
Reflected diffusion models adapt to low-dimensional data

Asbjørn Holk, Claudia Strauch, Lukas Trottner

Abstract

While the mathematical foundations of score-based generative models are increasingly well understood for unconstrained Euclidean spaces, many practical applications involve data restricted to bounded domains. This paper provides a statistical analysis of reflected diffusion models on the hypercube $[0,1]^D$ for target distributions supported on $d$-dimensional linear subspaces. A primary challenge in this setting is the absence of Gaussian transition kernels, which play a central role in standard theory in $\mathbb{R}^D$. By employing an easily implementable infinite series expansion of the transition densities, we develop analytic tools to bound the score function and its approximation by sparse ReLU networks. For target densities with Sobolev smoothness $α$, we establish a convergence rate in the $1$-Wasserstein distance of order $n^{-\frac{α+1-δ}{2α+d}}$ for arbitrarily small $δ> 0$, demonstrating that the generative algorithm fully adapts to the intrinsic dimension $d$. These results confirm that the presence of reflecting boundaries does not degrade the fundamental statistical efficiency of the diffusion paradigm, matching the almost optimal rates known for unconstrained settings.
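The "infinite series expansion of the transition densities" mentioned above refers to the eigenfunction expansion of the Neumann heat kernel on the unit interval. As a minimal illustrative sketch (this is a standard formula for Brownian motion with generator $\tfrac12\Delta$ reflected at $0$ and $1$, not code from the paper; the truncation level `K` is our own choice), the one-dimensional transition density can be evaluated by truncating the cosine series:

```python
import numpy as np

def reflected_bm_density(t, x, y, K=100):
    """Transition density p_t(x, y) of standard Brownian motion reflected
    at 0 and 1, via the Neumann cosine eigenfunction expansion:
        p_t(x, y) = 1 + 2 * sum_{k>=1} exp(-k^2 pi^2 t / 2)
                                      * cos(k pi x) * cos(k pi y),
    truncated after K terms."""
    k = np.arange(1, K + 1)
    terms = (np.exp(-0.5 * (k * np.pi) ** 2 * t)
             * np.cos(k * np.pi * x) * np.cos(k * np.pi * y))
    return 1.0 + 2.0 * terms.sum()
```

Because the eigenvalues decay like $k^2$ in the exponent, the truncation error is negligible for moderate $t$; on $[0,1]^D$ the kernel factorizes over coordinates, which is what makes the expansion easy to implement.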

Paper Structure

This paper contains 9 sections, 3 theorems, 18 equations, 2 figures.

Key Result

Theorem 1

Assume Assumptions (H1)–(H4). For any $\delta>0$, choose $\underline{T}\in\mathrm{Poly}(n^{-1})$ and $\overline{T}\asymp\log n$. Then there exists a family $\{\mathcal{S}_i\}_{i=1}^K$ of sparse ReLU neural network classes such that the reflected diffusion generative algorithm driven by the empirical denoising score estimator $\widehat{s}_n$ achieves the $1$-Wasserstein convergence rate $n^{-\frac{\alpha+1-\delta}{2\alpha+d}}$, where $\mathcal{L}(\overline{X}^{\widehat{s}_n}_{\overline{T}-\underline{T}})$ denotes the law of the backward process at time $\overline{T}-\underline{T}$.

Figures (2)

  • Figure 2.1: Graph of the function $\widehat{f}$ from Lemma \ref{lem:explicit_solution}. The function reflects the identity between the lines $y=0$ and $y=1$; applied component-wise, $f$ therefore essentially reflects the identity at the boundary $\partial[0,1]^D$.
  • Figure 2.2: Simulation of a reflected Brownian motion (green) along with the non-reflected version (blue) that is used for its construction using Lemma \ref{lem:explicit_solution}.
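Figure 2.2 illustrates constructing a reflected path by folding an unreflected one through the map $f$. As a rough sketch of that idea (the triangle-wave map below is our own stand-in for the explicit $f$ of the lemma, which we do not reproduce), one can simulate an unreflected Brownian path and fold it into $[0,1]$:

```python
import numpy as np

def fold(x):
    """Triangle-wave map reflecting the real line into [0, 1]:
    identity on [0, 1], mirrored on [1, 2], 2-periodic thereafter."""
    return np.abs(x - 2.0 * np.round(x / 2.0))

# Euler discretization of an unreflected Brownian path started at 0.5
rng = np.random.default_rng(0)
n_steps, dt = 1000, 1e-3
B = 0.5 + np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))

# Folding the path component-wise yields a path confined to [0, 1]
X = fold(B)
```

Plotting `B` and `X` on the same axes reproduces the blue/green comparison described in the figure caption.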

Theorems & Definitions (3)

  • Theorem (informal)
  • Lemma 2.1
  • Lemma 2.2