Learning Mesh-Free Discrete Differential Operators with Self-Supervised Graph Neural Networks

Lucas Gerken Starepravo, Georgios Fourtakas, Steven Lind, Ajay B. Harish, Tianning Tang, Jack R. C. King

Abstract

Mesh-free numerical methods provide flexible discretisations for complex geometries; however, classical meshless discrete differential operators typically trade low computational cost for limited accuracy, or high accuracy for substantial per-stencil computation. We introduce a parametrised framework for learning mesh-free discrete differential operators using a graph neural network trained via polynomial moment constraints derived from truncated Taylor expansions. The model maps a stencil's relative particle positions directly to discrete operator weights. This work demonstrates that neural networks can learn classical polynomial consistency while retaining robustness to irregular neighbourhood geometry. The learned operators depend only on local geometry, are resolution-agnostic, and can be reused across particle configurations and governing equations. We evaluate the framework using standard numerical analysis diagnostics, showing improved accuracy over Smoothed Particle Hydrodynamics and a favourable accuracy-cost trade-off relative to a representative high-order consistent mesh-free method in the moderate-accuracy regime. Applicability is demonstrated by solving the weakly compressible Navier-Stokes equations using the learned operators.
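The training objective described in the abstract can be made concrete. For an operator of the form $Df(\mathbf{x}_i) \approx \sum_j w_{ij}\,(f(\mathbf{x}_j) - f(\mathbf{x}_i))$, polynomial consistency requires the weights to reproduce the exact derivatives of all monomials up to the target order; truncating the Taylor expansion of $f$ about $\mathbf{x}_i$ turns this into a set of moment constraints on the weights. Below is a minimal PyTorch sketch of such a moment-constraint loss; the function name `moment_loss`, the tensor layout, and the 2D second-order setting are illustrative assumptions, not the authors' implementation.

```python
import torch

def moment_loss(weights, rel_pos, target_moments):
    """Self-supervised polynomial moment-constraint loss (illustrative sketch).

    weights:        (N, K) predicted stencil weights, N stencils of K neighbours.
    rel_pos:        (N, K, 2) neighbour positions relative to each stencil centre
                    (typically scaled by the stencil radius).
    target_moments: dict mapping monomial exponents (a, b) to the exact value of
                    the target operator applied to x^a y^b at the origin.
    """
    loss = 0.0
    for (a, b), target in target_moments.items():
        # Discrete moment of the stencil: sum_j w_ij * dx_ij^a * dy_ij^b.
        monomial = rel_pos[..., 0] ** a * rel_pos[..., 1] ** b
        moment = (weights * monomial).sum(dim=-1)           # shape (N,)
        loss = loss + ((moment - target) ** 2).mean()
    return loss

# Hypothetical second-order targets in 2D. The (0, 0) constraint is satisfied
# automatically by the difference form sum_j w_ij (f_j - f_i).
ddx_targets = {(1, 0): 1.0, (0, 1): 0.0, (2, 0): 0.0, (1, 1): 0.0, (0, 2): 0.0}
lap_targets = {(1, 0): 0.0, (0, 1): 0.0, (2, 0): 2.0, (1, 1): 0.0, (0, 2): 2.0}
```

For the Laplacian, the degree-two diagonal targets are 2 because $\partial^2(x^2)/\partial x^2 = 2$; driving all other moments to zero is what cancels the low-order truncation error on irregular stencils.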

Paper Structure

This paper contains 27 sections, 31 equations, 16 figures, and 6 tables.

Figures (16)

  • Figure 1: We learn the mapping from the relative positions of particles within a local neighbourhood to a local set of weights that approximate differential operators. The learned operator is physics-agnostic and local to the computational stencil. The framework consists of lifting the relative positions to a higher-dimensional latent space with a neural network, followed by a stack of message-passing graph layers and a final output head that maps latent representations to operator weights. A minimal sketch of this pipeline is given after the figure list.
  • Figure 2: Second-order normalised learned operators; colours indicate weight value. Models were trained with particle disturbance $\epsilon=1.0$ and inferred with the same noise level.
  • Figure 3: Convergence of the discrete $x$-derivative (left) and Laplacian (right) on a smooth test function with $\epsilon = 0.5$, noting that NeMDO was trained with $\epsilon = 1.0$. Relative $L_2$ error versus particle spacing $s$ for the learned NeMDO operator, a second-order LABFM operator, and two uncorrected SPH kernels (quintic spline and Wendland C2).
  • Figure 4: Normalised eigenvalue spectra of the global discrete operators with node disturbance $\epsilon = 1.0$ and 2500 nodes. Left: $x$-derivative operators; right: Laplacian operators.
  • Figure 5: Resolving power of the $x$-derivative for LABFM, NeMDO, and SPH (with the quintic spline and Wendland C2 kernels). The left plot shows the real part of the modal response and the right plot the imaginary part, on disordered particle distributions with noise level $\epsilon$. The horizontal axis shows the true wavenumber $k$ normalised by the Nyquist wavenumber, $\hat{k} = k/k_{\mathrm{Ny}}$, while the vertical axis shows the corresponding normalised effective wavenumber $k_{\mathrm{eff}}/k_{\mathrm{Ny}}$. The dashed black line indicates the ideal response, corresponding to a spectral method. The inset in the left plot shows the absolute difference between each operator and the ideal spectral response; the inset in the right plot shows a zoomed-in view of $\Im\{\hat{k}_{\mathrm{eff}}\}$ over the range $0 \leq \hat{k} \leq 0.3$. Solid and dotted lines indicate $k_y/k_x = 0$ and $k_y/k_x = 1$, respectively.
  • ...and 11 more figures
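For concreteness, the lifting / message-passing / output-head pipeline described in the Figure 1 caption might be sketched as follows. The class name `StencilGNN`, the hidden width, the number of layers, and the mean-aggregation update are assumptions made for illustration, not the paper's reported architecture.

```python
import torch
import torch.nn as nn

class StencilGNN(nn.Module):
    """Sketch of the encode / message-pass / decode pipeline of Figure 1."""

    def __init__(self, dim=2, hidden=64, layers=3):
        super().__init__()
        # Lift each neighbour's relative position to a latent vector.
        self.encoder = nn.Sequential(
            nn.Linear(dim, hidden), nn.SiLU(), nn.Linear(hidden, hidden)
        )
        # Message-passing blocks over the stencil: each node is updated from
        # its own state and the mean of all node states (assumed aggregation).
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(2 * hidden, hidden), nn.SiLU())
            for _ in range(layers)
        )
        # Decode each neighbour's latent state to a scalar operator weight.
        self.head = nn.Linear(hidden, 1)

    def forward(self, rel_pos):
        # rel_pos: (N, K, dim) neighbour offsets for N stencils of size K.
        h = self.encoder(rel_pos)                       # (N, K, hidden)
        for block in self.blocks:
            agg = h.mean(dim=1, keepdim=True).expand_as(h)
            h = h + block(torch.cat([h, agg], dim=-1))  # residual update
        return self.head(h).squeeze(-1)                 # (N, K) weights
```

Because only relative positions enter the network, the predicted weights inherit translation invariance, and rescaling the offsets by the stencil radius makes the mapping resolution-agnostic, consistent with the reuse across particle configurations claimed in the abstract. Training would pair this forward pass with a moment-constraint loss such as the sketch following the abstract.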