
Data-Driven Computing Methods for Nonlinear Physics Systems with Geometric Constraints

Yunjin Tong

TL;DR

We introduce a novel, data-driven framework that synergizes physics-based priors with advanced machine-learning (ML) techniques to overcome the computational and practical limitations of both first-principle-based methods and brute-force machine learning.

Abstract

In a landscape where scientific discovery is increasingly driven by data, the integration of machine learning (ML) with traditional scientific methodologies has emerged as a transformative approach. This paper introduces a novel, data-driven framework that synergizes physics-based priors with advanced ML techniques to address the computational and practical limitations inherent in first-principle-based methods and brute-force machine learning methods. Our framework showcases four algorithms, each embedding a specific physics-based prior tailored to a particular class of nonlinear systems, including separable and nonseparable Hamiltonian systems, hyperbolic partial differential equations, and incompressible fluid dynamics. Incorporating physical laws directly into the models preserves the system's intrinsic symmetries and conservation laws, ensuring solutions are physically plausible and computationally efficient. The integration of these priors also enhances the expressive power of neural networks, enabling them to capture complex patterns typical in physical phenomena that conventional methods often miss. As a result, our models outperform existing data-driven techniques in terms of prediction accuracy, robustness, and predictive capability, particularly in recognizing features absent from the training set, despite relying on small datasets, short training periods, and small sample sizes.
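
For context on the symmetry and conservation-law preservation the abstract emphasizes, the sketch below (not from the paper; all names are illustrative) contrasts a first-order symplectic update of an ideal pendulum, a simple separable Hamiltonian system, with a plain explicit Euler update. The symplectic scheme keeps the energy error bounded over long horizons, while the non-symplectic baseline drifts:

```python
import math

def hamiltonian(q, p):
    # H(q, p) = p^2/2 + (1 - cos q): an ideal pendulum with unit mass and length.
    return 0.5 * p * p + (1.0 - math.cos(q))

def symplectic_euler_step(q, p, dt):
    # Update p with dH/dq at the old q, then q with dH/dp at the new p.
    # This first-order scheme preserves the symplectic 2-form, so the
    # energy error stays bounded instead of accumulating.
    p = p - dt * math.sin(q)   # dH/dq = sin(q)
    q = q + dt * p             # dH/dp = p
    return q, p

def explicit_euler_step(q, p, dt):
    # Non-symplectic baseline: both updates use the old state.
    return q + dt * p, p - dt * math.sin(q)

q_s = q_e = 1.0
p_s = p_e = 0.0
h0 = hamiltonian(q_s, p_s)
for _ in range(10000):
    q_s, p_s = symplectic_euler_step(q_s, p_s, 0.01)
    q_e, p_e = explicit_euler_step(q_e, p_e, 0.01)

drift_symplectic = abs(hamiltonian(q_s, p_s) - h0)
drift_euler = abs(hamiltonian(q_e, p_e) - h0)
print(drift_symplectic, drift_euler)  # symplectic drift stays small
```

This bounded-error behavior is the property that symplectic network architectures such as Taylor-net and NSSNN build in by construction.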

Paper Structure

This paper contains 57 sections, 4 theorems, 80 equations, 19 figures, 3 tables, and 3 algorithms.

Key Result

Theorem 4.1

The network defined in eq:Tp_Taylor satisfies eq:partial_T.

Figures (19)

  • Figure 1: The schematic diagram of $\bm T_p(\bm p,\bm \theta_p)$ in Taylor-net. Source: tong2021symplectic.
  • Figure 2: The schematic diagram of Taylor-net. The input of Taylor-net is $(\bm q_0,\bm p_0)$, and the output is $(\bm q_n,\bm p_n)$. Taylor-net consists of $n$ iterations of fourth-order symplectic integrator. The input of the integrator is $(\bm q_{i-1},\bm p_{i-1})$, and the output is $(\bm q_{i},\bm p_{i})$. The four intermediate variables $\bm t_p^0\cdots \bm t_p^4$ and $\bm k_q^0\cdots \bm k_q^4$ show that the scheme is fourth-order. Source: tong2021symplectic.
  • Figure 3: (a) The forward pass of an NSSNN is composed of a forward pass through a differentiable symplectic integrator as well as a backpropagation step through the model. (b) The schematic diagram of NSSNN. Source: xiong2020nonseparable.
  • Figure 4: Comparison between NSSNN and HNN regarding the network design and prediction results of a vortex flow example. Source: xiong2020nonseparable.
  • Figure 5: Schematic diagram of RoeNet to predict future discontinuity from smooth observations. The blue band shows the distribution of the training set with respect to time, and the training set does not necessarily contain discontinuous solutions to the equations. Meanwhile, the orange band represents the solutions predicted with RoeNet, which may contain discontinuous solutions. Source: tong2024roenet.
  • ...and 14 more figures
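
Figure 2 describes Taylor-net as $n$ iterations of a fourth-order symplectic integrator whose drift term is the learned gradient network $\bm T_p(\bm p,\bm \theta_p)$. The paper's exact scheme is not reproduced here; as a sketch, a classical Forest–Ruth fourth-order composition with a known separable Hamiltonian standing in for the network, where `fourth_order_step`, `dH_dp`, and `dH_dq` are illustrative names only:

```python
import math

# Forest-Ruth fourth-order symplectic coefficients (Yoshida's composition
# of three leapfrog steps), with THETA = 1 / (2 - 2^(1/3)).
THETA = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
C = [THETA / 2.0,
     (1.0 - 2.0 ** (1.0 / 3.0)) * THETA / 2.0,
     (1.0 - 2.0 ** (1.0 / 3.0)) * THETA / 2.0,
     THETA / 2.0]
D = [THETA, -(2.0 ** (1.0 / 3.0)) * THETA, THETA, 0.0]

def fourth_order_step(q, p, dt, dH_dp, dH_dq):
    # Alternating drift (q) and kick (p) sub-steps, analogous to the
    # intermediate variables k_q^i and t_p^i in the Taylor-net diagram.
    # In Taylor-net, dH_dp would be the learned network T_p.
    for c, d in zip(C, D):
        q = q + c * dt * dH_dp(p)
        p = p - d * dt * dH_dq(q)
    return q, p

# Demo: harmonic oscillator H = (p^2 + q^2)/2, integrated over one period,
# after which the state should return very close to where it started.
q, p = 1.0, 0.0
n = 100
dt = 2.0 * math.pi / n
for _ in range(n):
    q, p = fourth_order_step(q, p, dt, dH_dp=lambda p: p, dH_dq=lambda q: q)

err = math.hypot(q - 1.0, p)
print(err)  # small deviation from the initial state after a full period
```

Because every sub-step is a shear in either $q$ or $p$, the composed map is exactly symplectic regardless of whether the gradients come from an analytic Hamiltonian or a neural network.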

Theorems & Definitions (7)

  • Theorem 4.1
  • Proof
  • Theorem 4.2
  • Proof
  • Theorem 4.3
  • Theorem 4.4
  • Proof