Disentangled Deep Priors for Bayesian Inverse Problems

Arkaprabha Ganguli, Emil Constantinescu

Abstract

We propose a structured prior for high-dimensional Bayesian inverse problems based on a disentangled deep generative model whose latent space is partitioned into auxiliary variables aligned with known and interpretable physical parameters and residual variables capturing remaining unknown variability. This yields a hierarchical prior in which interpretable coordinates carry domain-relevant uncertainty while the residual coordinates retain the flexibility of deep generative models. By linearizing the generator, we characterize the induced prior covariance and derive conditions under which the posterior exhibits approximate block-diagonal structure in the latent variables, clarifying when representation-level disentanglement translates into a separation of uncertainty in the inverse problem. We formulate the resulting latent-space inverse problem and solve it using MAP estimation and Markov chain Monte Carlo (MCMC) sampling. On elliptic PDE inverse problems, such as conductivity identification and source identification, the approach matches an oracle Gaussian process prior under correct specification and provides substantial improvement under prior misspecification, while recovering interpretable physical parameters and producing spatially calibrated uncertainty estimates.
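The abstract's linearization step can be sketched numerically. The snippet below is a minimal illustration under our own assumptions (a toy `tanh` generator, not the paper's model): for a generator $x = G(z)$ with latent prior $z \sim \mathcal{N}(0, I)$, linearizing $G$ around a point $z_0$ yields the induced prior covariance $\mathrm{Cov}[x] \approx J J^{\top}$, where $J$ is the Jacobian of $G$ at $z_0$.

```python
import numpy as np

def toy_generator(z):
    # Hypothetical nonlinear generator R^3 -> R^4 (illustration only).
    W = np.array([[0.5, -1.0,  0.2],
                  [1.0,  0.3,  0.0],
                  [0.0,  0.7,  1.1],
                  [0.4,  0.4, -0.6]])
    return np.tanh(W @ z)

def jacobian_fd(G, z0, eps=1e-6):
    """Central finite-difference Jacobian of G at z0."""
    cols = []
    for i in range(z0.size):
        e = np.zeros(z0.size)
        e[i] = eps
        cols.append((G(z0 + e) - G(z0 - e)) / (2.0 * eps))
    return np.stack(cols, axis=1)

z0 = np.zeros(3)                      # linearize at the prior mean
J = jacobian_fd(toy_generator, z0)    # 4 x 3 Jacobian
cov = J @ J.T                         # linearized induced prior covariance
```

Since `cov = J @ J.T` is symmetric positive semidefinite by construction, the linearized prior is a valid Gaussian approximation of the pushforward of the latent prior through the generator.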

Paper Structure

This paper contains 49 sections, 6 theorems, 78 equations, 9 figures, 3 tables.

Key Result

Proposition 2.1

Let $X$ and $Y$ be real-valued random variables. Assume that the joint moment generating function is finite on a neighborhood of $(0,0)$; i.e., there exists $\delta>0$ such that $M_{X,Y}(s,t)<\infty$ for all $|s|<\delta$ and $|t|<\delta$. Then the following are equivalent: … Moreover, if (2) or (3) hold for all pairs $(k,k')$ with $k+k'\le K$, then $p_{X,Y}$ and $p_X p_Y$ agree on all mixed moments of order at most $K$.
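The mechanism behind results of this type is a standard identity (a textbook fact, restated here for orientation rather than quoted from the paper): mixed moments are partial derivatives of the joint MGF at the origin, so factorization of the MGF forces factorization of every mixed moment.

```latex
\[
  \mathbb{E}\!\left[X^{k} Y^{k'}\right]
  = \left.\frac{\partial^{\,k+k'} M_{X,Y}}{\partial s^{k}\,\partial t^{k'}}\right|_{(s,t)=(0,0)},
  \qquad\text{so}\qquad
  M_{X,Y} = M_X\, M_Y
  \;\Longrightarrow\;
  \mathbb{E}\!\left[X^{k} Y^{k'}\right]
  = \mathbb{E}\!\left[X^{k}\right]\mathbb{E}\!\left[Y^{k'}\right]
\]
```

for all $k, k' \ge 0$, with the finiteness assumption guaranteeing that differentiation under the expectation is justified near the origin.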

Figures (9)

  • Figure 1: Disentanglement scatter plots: learned auxiliary latent variables vs. true physical parameters for (a) conductivity and (b) source problems. Generator-level cross-sensitivity is moderate in both cases ($J_{\mathrm{ortho,norm}} = 0.139$ for conductivity, $0.086$ for source). Posterior-level decoupling, measured as mean $|\mathrm{Corr}(u_i, z_{\mathrm{rec},j})|$ from HMC samples, is $0.030$ (max $0.57$) for conductivity and $0.147$ (max $0.47$) for source, indicating moderate to weak residual coupling between the interpretable and residual latent blocks.
  • Figure 2: AuxVAE training loss curves for both problems.
  • Figure 3: Conductivity identification: posterior mean field reconstructions for all four methods. True field, posterior mean, and pointwise standard deviation are shown.
  • Figure 4: Conductivity identification: marginal posterior distributions of the GP hyperparameters $(\mu,\sigma,\ell_x,\ell_y)$ from the AuxVAE. Vertical dashed lines indicate true values.
  • Figure 5: Conductivity identification: HMC trace plots for the AuxVAE auxiliary latent dimensions.
  • ...and 4 more figures
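The posterior-level decoupling metric quoted for Figure 1 (mean and max of $|\mathrm{Corr}(u_i, z_{\mathrm{rec},j})|$ across blocks) can be computed directly from MCMC draws. The following is our own reconstruction of such a diagnostic, not the authors' code; the synthetic samples and weak-coupling setup are illustrative assumptions.

```python
import numpy as np

def cross_block_corr(u, z_rec):
    """Mean and max absolute cross-correlation between two sample blocks.

    u:     (n, p) array of posterior samples of the auxiliary latents
    z_rec: (n, q) array of posterior samples of the residual latents
    """
    n = u.shape[0]
    u_c = (u - u.mean(axis=0)) / u.std(axis=0)
    z_c = (z_rec - z_rec.mean(axis=0)) / z_rec.std(axis=0)
    corr = np.abs(u_c.T @ z_c) / n  # (p, q) matrix of |Corr(u_i, z_j)|
    return corr.mean(), corr.max()

# Synthetic check: residual block weakly coupled to the first auxiliary latent.
rng = np.random.default_rng(0)
u = rng.normal(size=(5000, 4))
z_rec = 0.1 * u[:, :1] + rng.normal(size=(5000, 8))
mean_c, max_c = cross_block_corr(u, z_rec)
```

Small mean values with a larger max, as reported in the figure caption, would indicate that most latent pairs are decorrelated in the posterior while a few retain residual coupling.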

Theorems & Definitions (15)

  • Proposition 2.1: Moment factorization and independence under exponential integrability
  • proof
  • Corollary 2.2: Finite-order decorrelation implies finite-order moment factorization
  • proof
  • Remark 2.3
  • Corollary 2.4: Finite-sample convergence of the polynomial-correlation penalties
  • proof
  • Definition 4.1: Generator tangent subspaces and overlap
  • Lemma 4.2: Linearized induced prior covariance
  • proof
  • ...and 5 more