DiffusionPDE: Generative PDE-Solving Under Partial Observation

Jiahe Huang, Guandao Yang, Zichen Wang, Jeong Joon Park

TL;DR

This work proposes DiffusionPDE that can simultaneously fill in the missing information and solve a PDE by modeling the joint distribution of the solution and coefficient spaces, significantly outperforming the state-of-the-art methods for both forward and inverse directions.

Abstract

We introduce a general framework for solving partial differential equations (PDEs) using generative diffusion models. In particular, we focus on the scenarios where we do not have the full knowledge of the scene necessary to apply classical solvers. Most existing forward or inverse PDE approaches perform poorly when the observations on the data or the underlying coefficients are incomplete, which is a common assumption for real-world measurements. In this work, we propose DiffusionPDE that can simultaneously fill in the missing information and solve a PDE by modeling the joint distribution of the solution and coefficient spaces. We show that the learned generative priors lead to a versatile framework for accurately solving a wide range of PDEs under partial observation, significantly outperforming the state-of-the-art methods for both forward and inverse directions.
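To make the joint-distribution idea above concrete, below is a minimal, illustrative sketch (not the authors' released code) of training a diffusion denoiser on coefficient-solution pairs stacked as channels; the `unet` model, the `(a, u)` data pairs, and the noise range are assumptions made for this sketch rather than the paper's exact configuration.

```python
# Illustrative sketch only: denoising score-matching on the joint field
# x = concat(a, u), where a is the PDE coefficient (or initial state) and
# u is the solution (or final state). The `unet` denoiser and noise range
# are hypothetical placeholders, not the paper's exact setup.
import torch
import torch.nn.functional as F

def joint_training_step(unet, a, u, sigma_max=80.0):
    """One denoising step on the stacked joint sample of shape (B, 2, H, W)."""
    x0 = torch.cat([a, u], dim=1)                      # joint sample of (a, u)
    sigma = torch.rand(x0.shape[0], 1, 1, 1, device=x0.device) * sigma_max
    x_noisy = x0 + sigma * torch.randn_like(x0)        # perturb with Gaussian noise
    x_denoised = unet(x_noisy, sigma)                  # predict the clean joint field
    return F.mse_loss(x_denoised, x0)                  # standard denoising loss
```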

Paper Structure

This paper contains 49 sections, 18 equations, 26 figures, 9 tables, and 1 algorithm.

Figures (26)

  • Figure 1: We propose DiffusionPDE, a generative PDE solver under partial observations. Given a family of PDEs with coefficient (initial state) $a$ and solution (final state) $u$, we train the diffusion model on the joint distribution of $a$ and $u$. During inference, we gradually denoise Gaussian noise, guided by the sparse observations and the known PDE, to recover full predictions of both $a$ and $u$ that align well with the sparse observations and the given equation (a minimal illustrative sketch of this guided sampling loop follows the figure list).
  • Figure 2: Unlike forward or inverse PDE solvers, DiffusionPDE can take sparse observations of either the coefficient $\boldsymbol{a}$ or the solution $\mathbf{u}$ and recover both, using a single trained network. Here, we show the recovered $\boldsymbol{a}$ and $\mathbf{u}$ for Darcy's equation given sparse observations of $\boldsymbol{a}$, $\mathbf{u}$, or both. Compared with the ground truth, our method successfully recovers the PDE in all cases.
  • Figure 3: Usefulness of the PDE loss. We visualize the absolute errors of the recovered coefficient and solution of the Helmholtz equation, comparing guidance with only the observation loss against guidance with the additional PDE loss. The errors drop significantly when the PDE loss is applied.
  • Figure 4: We compare DiffusionPDE with state-of-the-art neural PDE solvers [li2020fourier, li2021physics, lu2021learning, raissi2019physics]. In the forward Navier-Stokes problem, we give $500$ sparse observations of the initial state and solve for the final state. In the inverse setup, we take observations of the final state and solve for the initial state. For the Burgers' equation, we use $5$ sensors across all time steps and aim to recover the solution at every time step. Note that we train the baselines on neighboring snapshot pairs so that they can incorporate continuous observations of the Burgers' equation. Results show that existing methods do not support PDE solving under sparse observations, and we believe they are not easily extendable to do so. We refer readers to the supplementary material for the complete set of visual results.
  • Figure 5: We compare GraphPDE [zhao2022learning] and our method on the inverse bounded Navier-Stokes equation. Given the boundary conditions and $1\%$ observations of the final vorticity field, we solve for the initial vorticity field. The fluid flows in from the top, with boundary conditions at the edges and around a cylinder in the middle. While GraphPDE recovers the overall pattern of the initial state, it suffers from noise where the fluid passes the cylinder and misses the high vorticities at the bottom.
  • ...and 21 more figures
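
As referenced in the Figure 1 and Figure 3 captions, inference guides the reverse diffusion with both a sparse-observation loss and a PDE-residual loss. The following is a hedged, minimal sketch of such a guided sampling loop in the style of diffusion posterior sampling; the `denoiser`, `pde_residual`, `obs_mask`/`obs_values` inputs, the Euler-style sigma schedule, and the guidance weights are all illustrative assumptions, not the paper's exact algorithm or hyperparameters.

```python
# Illustrative guided reverse-diffusion loop (not the authors' exact algorithm):
# each step denoises the joint field and nudges it toward (i) the sparse
# observations and (ii) a small residual of the known PDE. All inputs and the
# sigma schedule below are assumptions made for this sketch.
import math
import torch

def guided_sampling(denoiser, pde_residual, obs_mask, obs_values, shape,
                    steps=200, sigma_max=80.0, sigma_min=0.01,
                    w_obs=1.0, w_pde=1.0):
    sigmas = torch.logspace(math.log10(sigma_max), math.log10(sigma_min), steps)
    x = torch.randn(shape) * sigmas[0]                      # start from pure noise
    for i in range(steps - 1):
        x = x.detach().requires_grad_(True)
        x0_hat = denoiser(x, sigmas[i])                     # denoised joint (a, u)
        loss_obs = ((x0_hat - obs_values)[obs_mask] ** 2).sum()  # sparse observation loss
        loss_pde = (pde_residual(x0_hat) ** 2).mean()       # PDE residual loss
        guidance = torch.autograd.grad(w_obs * loss_obs + w_pde * loss_pde, x)[0]
        d = (x - x0_hat) / sigmas[i]                        # denoising (Euler) direction
        x = x + (sigmas[i + 1] - sigmas[i]) * d - guidance  # step down in noise, apply guidance
    return x.detach()
```

In the paper's framing, the same sampling loop serves both directions: placing the sparse observations on the coefficient channel gives a forward solve, while placing them on the solution channel gives an inverse solve, with the joint field recovered in either case.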