Stochastic Dimension Implicit Functional Projections for Exact Integral Conservation in High-Dimensional PINNs

Zhangyong Liang

Abstract

Enforcing exact macroscopic conservation laws, such as mass and energy, in neural partial differential equation (PDE) solvers is computationally challenging in high dimensions. Traditional discrete projections rely on deterministic quadrature that scales poorly and restricts mesh-free formulations like PINNs. Furthermore, high-order operators incur heavy memory overhead, and generic optimization often lacks convergence guarantees for non-convex conservation manifolds. To address this, we propose the Stochastic Dimension Implicit Functional Projection (SDIFP) framework. Instead of projecting discrete vectors, SDIFP applies a global affine transformation to the continuous network output. This yields closed-form solutions for integral constraints via detached Monte Carlo (MC) quadrature, bypassing spatial grid dependencies. For scalable training, we introduce a doubly-stochastic unbiased gradient estimator (DS-UGE). By decoupling spatial sampling from differential operator subsampling, the DS-UGE reduces memory complexity from $\mathcal{O}(M \times N_{\mathcal{L}})$ to $\mathcal{O}(N \times |\mathcal{I}|)$. SDIFP mitigates sampling variance, preserves solution regularity, and maintains $\mathcal{O}(1)$ inference efficiency, providing a scalable, mesh-free approach for solving conservative high-dimensional PDEs.
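To make the projection idea concrete, here is a minimal sketch of the SDIFP-style affine correction as we read it from the abstract: the batch Monte Carlo estimate of the integral is shifted by a closed-form constant so the conserved quantity is matched exactly on every collocation batch. The function name `sdifp_project` and its arguments are illustrative assumptions, not the paper's API; in training, the correction would be treated as a detached (stop-gradient) constant.

```python
import numpy as np

def sdifp_project(u_raw, target_integral, volume):
    """Illustrative SDIFP-style affine projection (assumed form).

    u_raw: raw network outputs at N collocation points sampled uniformly
    over a domain of measure `volume`. The integral is estimated by MC
    quadrature, and a constant shift enforces the integral constraint
    exactly on this batch, with a closed-form solution.
    """
    mc_integral = volume * u_raw.mean()               # detached MC quadrature estimate
    shift = (target_integral - mc_integral) / volume  # closed-form affine correction
    return u_raw + shift

# Toy check: after projection, the batch MC integral equals the target.
rng = np.random.default_rng(0)
u = rng.normal(size=1024)
u_proj = sdifp_project(u, target_integral=3.0, volume=2.0)
print(2.0 * u_proj.mean())
```

Because the correction is a single scalar shift computed in closed form, the projected output satisfies the integral constraint on each random batch without any spatial grid, which is the mesh-free property the abstract emphasizes.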

Paper Structure

This paper contains 24 sections, 29 equations, 10 figures, 2 tables.

Figures (10)

  • Figure 1: Conservation error comparison under fixed-grid and random collocation.
  • Figure 1: Time evolution of conserved quantities under fixed grid sampling.
  • Figure 2: Comparison of two projection strategies under random collocation. (a) Traditional projection: a fixed quadrature set causes deviation from $\bar{c}_1(t)$ when the collocation set varies. (b) SDIFP projection: adaptive shifts align the projected mean with $\bar{c}_1(t)$ on every batch.
  • Figure 2: Time evolution of conserved quantities under random collocation sampling.
  • Figure 3: Momentum and energy conservation errors for representative PINN-based methods under fixed-grid and random collocation (1D validation).
  • ...and 5 more figures