Divergence-free Linearized Neural Networks: Integral Representation and Optimal Approximation Rates

Juncai He, Xinliang Liu, Zitong Tian

Abstract

This paper studies the numerical approximation of divergence-free vector fields by linearized shallow neural networks, also referred to as random feature models or finite neuron spaces. Combining the stable potential lifting for divergence-free fields with the scalar Sobolev integral representation theory via ReLU$^k$ networks, we derive a core integral representation of divergence-free Sobolev vector fields through antisymmetric potentials parameterized by linearized ReLU$^k$ neural networks. This representation, together with a quasi-uniform distribution argument for the inner parameters, yields optimal approximation rates for such linearized ReLU$^k$ neural networks under an exact divergence-free constraint. Numerical experiments in two and three spatial dimensions, including $L^2$ projection and steady Stokes problems, confirm the theoretical rates and illustrate the effectiveness of exactly divergence-free conditions in computation.
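The potential lifting mentioned in the abstract is easiest to see in two dimensions, where any scalar potential (stream function) yields an exactly divergence-free field via the rotated gradient $u = (\partial_y \psi, -\partial_x \psi)$. A minimal symbolic sketch of this fact; the particular potential below is an arbitrary illustrative choice, not one taken from the paper:

```python
import sympy as sp

x, y = sp.symbols("x y")

# Hypothetical smooth scalar potential (stream function); any choice works.
psi = sp.sin(x) * sp.cos(y) + x**2 * y

# 2D curl lifting: u = (d psi/dy, -d psi/dx) is divergence-free by construction.
u1 = sp.diff(psi, y)
u2 = -sp.diff(psi, x)

# The divergence cancels identically: d2 psi/(dx dy) - d2 psi/(dy dx) = 0.
div_u = sp.simplify(sp.diff(u1, x) + sp.diff(u2, y))
print(div_u)  # 0
```

The same cancellation of mixed partial derivatives underlies the antisymmetric-potential construction in general dimension $d$.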

Paper Structure

This paper contains 39 sections, 5 theorems, 93 equations, 11 figures, and 11 tables.

Key Result

Theorem 2.1

The Sobolev space $H^{\frac{d+2k+1}{2}}(\Omega)$ is the RKHS associated with the ReLU$^k$ features $\sigma_k(t) = \max(t,0)^k$. In particular, every $f \in H^{\frac{d+2k+1}{2}}(\Omega)$ admits an integral representation over ReLU$^k$ ridge features: there exists a $\psi \in L^2(\mathbb{S}^d)$ (not necessarily unique) such that
$$f(x) = \int_{\mathbb{S}^d} \psi(\omega)\, \sigma_k\!\left(\omega \cdot (x, 1)\right) d\omega, \qquad x \in \Omega.$$
Furthermore, $\|\psi\|_{L^2(\mathbb{S}^d)} \lesssim \|f\|_{H^{\frac{d+2k+1}{2}}(\Omega)}$.
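The linearized (random feature / finite neuron) model behind this representation freezes the inner parameters $\omega$, sampled from the sphere $\mathbb{S}^d$, and trains only the outer linear coefficients. A minimal NumPy sketch under these assumptions, fitting an illustrative smooth 2D target by least squares (the target, sample sizes, and function names here are ours, not the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, k = 2, 200, 2          # input dimension, number of neurons, ReLU power

# Inner parameters: directions drawn uniformly from the unit sphere S^d in
# R^{d+1} (the bias absorbed as the last coordinate), then frozen; only the
# outer linear coefficients are trained, as in a random feature model.
omega = rng.standard_normal((n, d + 1))
omega /= np.linalg.norm(omega, axis=1, keepdims=True)

def features(x):
    """ReLU^k ridge features sigma_k(omega . (x, 1)) for points x in R^d."""
    x_tilde = np.hstack([x, np.ones((x.shape[0], 1))])
    return np.maximum(x_tilde @ omega.T, 0.0) ** k

# Fit a smooth scalar target on [0,1]^2 by least squares in the outer weights.
f = lambda x: np.sin(np.pi * x[:, 0]) * np.cos(np.pi * x[:, 1])
x_train = rng.uniform(0.0, 1.0, size=(1000, d))
coef, *_ = np.linalg.lstsq(features(x_train), f(x_train), rcond=None)

x_test = rng.uniform(0.0, 1.0, size=(500, d))
rel_err = np.linalg.norm(features(x_test) @ coef - f(x_test)) / np.linalg.norm(f(x_test))
print(f"relative L2 test error: {rel_err:.2e}")
```

Repeating such a fit for increasing $n$ is what produces error-decay curves of the kind shown in the paper's figures.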

Figures (11)

  • Figure 1: Divergence-free $L^2$ approximation: relative $L^2$ error decay with number of neurons. (Left) $d=2$. (Right) $d=3$. Each solid curve is accompanied by a dashed line of corresponding color, whose slope equals the theoretical upper bound on the convergence rate.
  • Figure 2: Solving Stokes equation in div-free FNS: relative $\dot{H}^1$ seminorm error decay with number of neurons. (Left) $d=2$. (Right) $d=3$. Dashed lines in matching colors show the theoretical upper bound slopes for reference.
  • Figure 3: Solving classical lid-driven cavity flow by div-free FNS, $k=2, n=3202$.
  • Figure 4: Solving regularized lid-driven cavity flow by divergence-free FNS, $k=2, n=3202$.
  • Figure 5: Solving regularized lid-driven cavity flow by divergence-free FNS compared with finite element method: $L^2$ error and $\dot{H}^1$ seminorm error decay with degree of freedom. Left: $L^2$ error. Right: $\dot{H}^1$ seminorm error.
  • ...and 6 more figures

Theorems & Definitions (12)

  • Theorem 2.1: Integral representation of scalar Sobolev space, Theorem 2.3 of liu_integralrepresentationssobolev_2025
  • Theorem 2.2: Linear Approximation Rate, Theorem 2.2 in liu_integralrepresentationssobolev_2025
  • Remark 2.3
  • Lemma 2.4: Potential representation and $H^{r+1}$-stability
  • proof
  • Theorem 3.1: Integral representation of $H^r_{\mathrm{div}}(\Omega)$
  • proof
  • Definition 4.1: Divergence-free finite neuron space
  • Theorem 4.2: Convergence of $V_{n}^{k}$ for General Dimension $d \ge 2$
  • proof
  • ...and 2 more
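The divergence-free finite neuron space of Definition 4.1 can be illustrated in 2D: differentiate a linearized ReLU$^k$ potential network analytically and rotate the gradient, so every member field satisfies the divergence-free constraint exactly rather than approximately. A sketch under these assumptions (function names are ours; the finite-difference check deliberately skips points near the ReLU kinks, where the field is not differentiable):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 50, 2                 # neurons and ReLU power of the scalar potential

# Frozen inner parameters on the unit sphere in R^3 (direction + bias) and random
# outer coefficients c for the potential psi(x) = sum_i c_i sigma_k(omega_i . (x, 1)).
omega = rng.standard_normal((n, 3))
omega /= np.linalg.norm(omega, axis=1, keepdims=True)
c = rng.standard_normal(n)

def velocity(x):
    """Rotated gradient u = (d psi/dy, -d psi/dx): divergence-free by construction."""
    z = np.hstack([x, np.ones((len(x), 1))]) @ omega.T   # ridge variables, shape (m, n)
    dpsi = k * np.maximum(z, 0.0) ** (k - 1) * c         # c_i * sigma_k'(z_i)
    return np.stack([dpsi @ omega[:, 1], -(dpsi @ omega[:, 0])], axis=1)

# Central-difference check of div u at random points, keeping only points that
# lie farther than 10h from every ReLU kink.
h = 1e-5
x = rng.uniform(0.2, 0.8, size=(200, 2))
z = np.hstack([x, np.ones((len(x), 1))]) @ omega.T
x = x[np.all(np.abs(z) > 10 * h, axis=1)]
ex, ey = np.array([h, 0.0]), np.array([0.0, h])
div = ((velocity(x + ex)[:, 0] - velocity(x - ex)[:, 0])
       + (velocity(x + ey)[:, 1] - velocity(x - ey)[:, 1])) / (2 * h)
print(np.max(np.abs(div)))   # close to machine precision
```

Because the divergence cancels algebraically (the mixed partials of the potential agree), the residual is floating-point noise rather than a discretization error, which is the practical content of the "exactly divergence-free" constraint.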