Latent representation learning based model correction and uncertainty quantification for PDEs

Wenwen Zhou, Xiaodong Feng, Ling Guo, Hao Wu

Abstract

Model correction is essential for reliable PDE learning when the governing physics is misspecified due to simplified assumptions or limited observations. In the machine learning literature, existing correction methods typically operate in parameter space, where uncertainty is often quantified via sampling- or ensemble-based methods; these can be computationally prohibitive, which motivates more efficient representation-level alternatives. To this end, we develop a latent-space model-correction framework by extending our previously proposed LVM-GP solver, which couples a latent-variable model with Gaussian processes (GPs) for uncertainty-aware PDE learning. Our architecture employs a shared confidence-aware encoder and two probabilistic decoders, with the solution decoder predicting the solution distribution and the correction decoder inferring a discrepancy term to compensate for model-form errors. The encoder constructs a stochastic latent representation by balancing deterministic features with a GP prior through a learnable confidence function. Conditioned on this shared latent representation, the two decoders jointly quantify uncertainty in both the solution and the correction under soft physics constraints with noisy data. An auxiliary latent-space regularization is introduced to control the learned representation and enhance robustness. This design enables joint uncertainty quantification of both the solution and the correction within a single training procedure, without parameter sampling or repeated retraining. Numerical experiments show accuracy comparable to Ensemble PINNs and B-PINNs, with improved computational efficiency and robustness to misspecified physics.
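The confidence-aware latent construction described above can be sketched numerically. The following is a minimal illustrative example, not the paper's implementation: the feature map, confidence function, and RBF kernel choice are all assumptions standing in for the trained components. The latent code blends deterministic encoder features with a GP prior sample via a pointwise confidence weight, and Monte Carlo sampling over the latent noise yields a predictive mean and spread.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder_features(x):
    # Hypothetical deterministic feature map h(x); stands in for a trained encoder.
    return np.tanh(np.stack([x, x**2], axis=-1))

def confidence(x):
    # Hypothetical learnable confidence function c(x) in (0, 1);
    # a fixed sigmoid here, purely for illustration.
    return 1.0 / (1.0 + np.exp(-x[..., None]))

def gp_prior_sample(x, length_scale=0.5, n_latent=2):
    # One sample per latent dimension from a zero-mean GP with an RBF kernel.
    K = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / length_scale**2)
    K += 1e-6 * np.eye(len(x))  # jitter for numerical stability
    L = np.linalg.cholesky(K)
    return L @ rng.standard_normal((len(x), n_latent))

def latent_code(x):
    # Stochastic latent representation: confidence-weighted blend of
    # deterministic features and a GP prior draw (form assumed from the abstract).
    c = confidence(x)
    return c * encoder_features(x) + (1.0 - c) * gp_prior_sample(x)

x = np.linspace(-1.0, 1.0, 50)
z_samples = np.stack([latent_code(x) for _ in range(100)])  # Monte Carlo over omega
mu_z, sigma_z = z_samples.mean(axis=0), z_samples.std(axis=0)
print(mu_z.shape, sigma_z.shape)  # (50, 2) (50, 2)
```

Where the confidence is high, the latent code follows the deterministic features and its sample variance shrinks; where it is low, the GP prior dominates and uncertainty grows, which is the mechanism the decoders then propagate to the solution and correction.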

Paper Structure

This paper contains 15 sections, 40 equations, 23 figures, 4 tables, and 1 algorithm.

Figures (23)

  • Figure 1: Schematic of latent-space model correction in the LVM-GP framework. The input $\bm{x}$ is encoded by a confidence-aware encoder to form a stochastic latent code $\bm{z}_{1}(\bm{x},\bm{\omega})$, which is shared by two decoders. The solution decoder is implemented by stacked deep integral layers, producing the predictive mean and uncertainty $(\mu_u,\sigma_u)$ of the solution field $u(\bm{x})$. In parallel, the correction decoder outputs a discrepancy term via the correction network $\mathcal{S}_{\psi}$, and the physics quantity is coupled to the solution through the corrected operator relation $\mu_f=\widetilde{\mathcal{N}}_{\bm{x}}(\mu_u)+\mathcal{S}_{\psi}(\bm{z}_{1}(\bm{x},\bm{\omega}))$. Finally, by leveraging learned variance functions, the model outputs complete probability distributions for the solution $u(\bm{x})$, the source term $f(\bm{x})$, and the correction term $\mathcal{S}_{\psi}(\bm{z}_{1}(\bm{x}, \bm{\omega}))$, quantifying their uncertainties.
  • Figure 2: S1: LVM-GP with the correct physical model
  • Figure 3: S2: LVM-GP with a misspecified physical model
  • Figure 4: S3: Correcting the misspecified physical model by a correction network using LVM-GP
  • Figure 6: S1: LVM-GP with the correct physical model
  • ...and 18 more figures
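The corrected operator relation in the Figure 1 caption, $\mu_f = \widetilde{\mathcal{N}}_{\bm{x}}(\mu_u) + \mathcal{S}_{\psi}(\bm{z}_1)$, can be illustrated with a toy 1D example. Everything below is an illustrative assumption, not one of the paper's experiments: the true operator is taken as $-u'' + u$, the misspecified one drops the reaction term, and the discrepancy the correction network would need to learn is computed directly.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)
u = np.sin(np.pi * x)                 # a known solution field (toy example)
f_true = (np.pi**2 + 1.0) * u         # source under the assumed true operator -u'' + u

def misspecified_operator(u, dx):
    # Misspecified N~_x(u) = -u'' via central differences; the +u term is "missing".
    upp = np.zeros_like(u)
    upp[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return -upp

dx = x[1] - x[0]
f_tilde = misspecified_operator(u, dx)

# Model-form discrepancy the correction term must absorb: here it equals u
# up to discretization error, since f_true - (-u'') = u.
correction = f_true - f_tilde
print(np.max(np.abs(correction[1:-1] - u[1:-1])))  # small: O(dx^2) finite-difference error
```

In the framework itself, this discrepancy is not computed analytically but inferred by the correction decoder $\mathcal{S}_{\psi}$ from the shared latent code, with its own predictive variance.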

Theorems & Definitions (3)

  • Remark 3.1
  • Remark 3.2
  • Remark 3.3