Learnable Viscosity Modulation in Physics-Informed Neural Networks for Incompressible Flow Reconstruction

Ke Xu, Ze Tao, Fujun Liu

Abstract

Accurately and stably solving the incompressible Navier--Stokes equations with physics-informed neural networks (PINNs) remains challenging, particularly for sparse or noisy observations and for flow regimes in which the local balance among convection, diffusion, and pressure is difficult to capture. To address this issue, we propose a framework, denoted as LVM-PINN, which incorporates a learnable viscosity modulation (LVM) mechanism into the PINN residual. Specifically, the model predicts a spatiotemporal scalar field that is embedded directly into the viscous diffusion term of the momentum equations, thereby enabling adaptive modulation of the local dissipation strength during training. This modification improves optimization stability while enhancing the representation of complex flow structures. The effect of the proposed mechanism is further examined through a controlled ablation setting with an otherwise unchanged network architecture, as well as through comparisons with GRU- and residual-attention-based backbone baselines. Numerical experiments on two-dimensional benchmark problems, including the Kovasznay flow and two manufactured forcing flows, show that the proposed framework yields more stable training behavior and more accurate flow reconstruction under sparse and noisy data conditions.
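The core idea of the LVM mechanism, multiplying the viscous diffusion term by a predicted per-point modulation field, can be illustrated with a minimal finite-difference sketch. This is not the authors' implementation: in the paper the derivatives come from automatic differentiation through the network and the modulation field `m` is itself a network output, whereas here `m` is just an array passed in alongside the solution fields.

```python
import numpy as np

def momentum_residual_x(u, v, p, nu, m, dx, dy):
    """Steady 2-D x-momentum residual on interior grid points,
    with the viscous term scaled by a pointwise modulation field m:
        r = u*u_x + v*u_y + p_x - nu * m * (u_xx + u_yy)
    Setting m == 1 everywhere recovers the unmodulated residual."""
    u_x  = (u[1:-1, 2:] - u[1:-1, :-2]) / (2 * dx)           # du/dx, central
    u_y  = (u[2:, 1:-1] - u[:-2, 1:-1]) / (2 * dy)           # du/dy, central
    p_x  = (p[1:-1, 2:] - p[1:-1, :-2]) / (2 * dx)           # dp/dx, central
    u_xx = (u[1:-1, 2:] - 2 * u[1:-1, 1:-1] + u[1:-1, :-2]) / dx**2
    u_yy = (u[2:, 1:-1] - 2 * u[1:-1, 1:-1] + u[:-2, 1:-1]) / dy**2
    conv = u[1:-1, 1:-1] * u_x + v[1:-1, 1:-1] * u_y         # convection
    return conv + p_x - nu * m[1:-1, 1:-1] * (u_xx + u_yy)
```

By construction, doubling `m` has the same effect on the residual as doubling the base viscosity `nu`, which is what makes the modulation an adaptive rescaling of local dissipation strength rather than a change to the equation's form.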

Paper Structure

This paper contains 12 sections, 46 equations, 12 figures, 3 tables.

Figures (12)

  • Figure 1: Training loss histories for the Kovasznay flow under the four compared methods: (a) Nu_learn, (b) Nu_off, (c) ResAttn, and (d) GRU. For each method, the total loss, data loss, and equation loss are shown.
  • Figure 2: Pressure-field reconstruction for the Kovasznay flow. In each row, from left to right: reference solution, prediction, and absolute error. The four rows correspond to (a) Nu_learn, (b) Nu_off, (c) ResAttn, and (d) GRU, respectively.
  • Figure 3: Reconstruction of the streamwise velocity component $u$ for the Kovasznay flow. In each row, from left to right: reference solution, prediction, and absolute error. The four rows correspond to (a) Nu_learn, (b) Nu_off, (c) ResAttn, and (d) GRU, respectively.
  • Figure 4: Reconstruction of the transverse velocity component $v$ for the Kovasznay flow. In each row, from left to right: reference solution, prediction, and absolute error. The four rows correspond to (a) Nu_learn, (b) Nu_off, (c) ResAttn, and (d) GRU, respectively.
  • Figure 5: Training loss histories for manufactured forcing flow I under the four compared methods: (a) Nu_learn, (b) Nu_off, (c) ResAttn, and (d) GRU. For each method, the total loss, data loss, and equation loss are shown.
  • ...and 7 more figures