
Goal oriented error estimation for adaptive sampling of PINNs

Medard Govoeyi, Thomas Richter

Abstract

Physics-Informed Neural Networks (PINNs) are mesh-free approaches for the numerical approximation of partial differential equations, where a neural network is trained by minimizing a loss function derived from the governing equations and boundary conditions. The Deep Ritz method can be interpreted as a particular variational form of a PINN, where the loss corresponds to the minimization of an energy functional associated with a symmetric positive definite problem. In this work, we study the approximation of the Laplace equation using both the classical PINN formulation and its variational counterpart, the Deep Ritz method, with the objective of accurately estimating prescribed goal functionals. When standard sampling strategies, such as uniform or loss-based sampling, are employed during training, the convergence of the functional error and the attained minimal functional value can be slow. To address this issue, we introduce a functional-oriented importance sampling strategy that can be applied to both PINNs and the Deep Ritz method. The key ingredient is the construction of a reliable and accurate estimator for the error in a given quantity of interest. This estimator is derived using concepts from the Dual Weighted Residual (DWR) framework and is implemented entirely within the neural network setting. It is then used to adaptively guide the sampling of training points in the computational domain, focusing computational effort on regions that have the strongest influence on the functional value. Numerical experiments demonstrate that the proposed adaptive sampling strategy significantly accelerates the convergence of the functional error and improves the minimization of the target functional during training for both PINN and Deep Ritz formulations.
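As a rough illustration of the resampling step described in the abstract, the sketch below shows one way a DWR-type indicator could drive importance sampling of collocation points in PyTorch for the model problem $-\Delta u = f$. This is a minimal sketch under our own assumptions, not the authors' implementation: the names `dwr_resample`, `u_net`, `z_net`, `candidates`, and `n_new` are hypothetical, `f` is assumed to return pointwise source values, and `z_net` is assumed to approximate the adjoint (dual) solution associated with the goal functional.

```python
import torch

def dwr_resample(u_net, z_net, f, candidates, n_new):
    # Hypothetical DWR-weighted resampling step (a sketch, not the
    # authors' code): score candidate collocation points by the product
    # of the strong PDE residual and an adjoint weight, then draw the
    # new training points from the induced probability measure p_mu.
    x = candidates.clone().requires_grad_(True)          # (N, d) candidates
    u = u_net(x)                                         # (N, 1) primal values
    # Laplacian of u_theta via nested automatic differentiation
    grad_u = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    lap_u = torch.zeros(x.shape[0])
    for i in range(x.shape[1]):
        lap_u = lap_u + torch.autograd.grad(
            grad_u[:, i].sum(), x, create_graph=True)[0][:, i]
    residual = f(x) + lap_u                              # residual of -Δu = f
    with torch.no_grad():
        z = z_net(x).squeeze(-1)                         # adjoint weight z(x)
        eta = (residual.detach() * z).abs()              # pointwise indicator
        p = eta / eta.sum()                              # sampling measure p_mu
        idx = torch.multinomial(p, n_new, replacement=True)
    return candidates[idx]                               # new collocation points
```

Uniform sampling corresponds to replacing $p_\mu$ by a constant density; Figure 3 compares the two choices.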

Paper Structure

This paper contains 26 sections, 1 theorem, 69 equations, 9 figures, and 1 algorithm.

Key Result

Lemma 1 (Variational Structure of Deep Ritz)

Let $u_\theta \in V_{\mathcal{N}}$ be parameterized by $\theta \in \Theta$. Since the parameter space $\Theta$ has a linear structure, any (not necessarily unique) minimizer $u_\theta$ of the energy minimization problem

$$u_\theta = \operatorname*{arg\,min}_{v \in V_{\mathcal{N}}} E(v), \qquad E(v) := \frac{1}{2}\,(\nabla v, \nabla v) - (f, v),$$

can be characterized as follows:

$$\frac{\mathrm{d}}{\mathrm{d}\varepsilon}\, E\big(u_{\theta + \varepsilon \psi}\big)\Big|_{\varepsilon = 0} = 0 \quad \text{for all } \psi \in \Theta.$$

In terms of neural network functions, this characterization corresponds to the Euler-Lagrange equation

$$(\nabla u_\theta, \nabla \phi) = (f, \phi) \quad \text{for all } \phi \in V_{\mathcal{N}}'(u_\theta),$$

where

$$V_{\mathcal{N}}'(u_\theta) := \operatorname{span}\big\{ \partial_{\theta_i} u_\theta : i = 1, \dots, \dim \Theta \big\}$$

is the tangent space of $V_{\mathcal{N}}$ at $u_\theta$. In general, $V_{\mathcal{N}} \neq V_{\mathcal{N}}'(u_\theta)$.
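For context, the energy $E(v)$ in the lemma is typically estimated by Monte Carlo quadrature over sampled interior points. The following is a minimal sketch under our assumptions (uniform interior samples on a domain of unit volume; hypothetical names `deep_ritz_energy`, `u_net`, `f`), not the authors' implementation:

```python
import torch

def deep_ritz_energy(u_net, f, x):
    # Monte Carlo estimate of E(v) = 1/2 (grad v, grad v) - (f, v);
    # a sketch assuming x is drawn uniformly from a domain of unit volume.
    x = x.clone().requires_grad_(True)
    u = u_net(x)                                          # (N, 1)
    grad_u = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    energy = 0.5 * (grad_u ** 2).sum(dim=1) - f(x) * u.squeeze(-1)
    return energy.mean()                                  # average over N points
```

Minimizing this quantity over $\theta$ (e.g. with `torch.optim.Adam`) enforces the Euler-Lagrange condition of Lemma 1 only in the directions of the tangent space $V_{\mathcal{N}}'(u_\theta)$; in practice a boundary penalty term is added, since network functions do not satisfy Dirichlet conditions exactly.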

Figures (9)

  • Figure 1: Case 1: Evolution of the functional error and its estimators during training. The vertical dashed red line indicates the iteration at which resampling is performed.
  • Figure 2: (a) Pointwise functional error, (b) corresponding pointwise error estimator, (c) corresponding pointwise improved error estimator with localization.
  • Figure 3: Case 1: Comparison of uniform sampling (left) vs. sampling using the DWR measure $p_\mu$ (right).
  • Figure 4: Case 1: Absolute value of the functional error $|\mathcal{E}(u)| = |J(u) - J(u_\theta)|$ during the training. The vertical dashed red line indicates the iteration at which resampling is performed.
  • Figure 5: Case 2: Absolute value of the functional error $|\mathcal{E}(u)| = |J(u) - J(u_{\mathcal{N}})|$ during the training. The vertical dashed red line indicates the iteration at which resampling is performed.
  • ...and 4 more figures

Theorems & Definitions (4)

  • Lemma 1: Variational Structure of Deep Ritz
  • Remark 1: Approximation of the adjoint solution
  • Remark 2: Choice of the adjoint network
  • Remark 3: Error localization