A Geometry-Aware Operator Learning Framework for Interface Problems on Varying Domains

Shanshan Xiao, Ye Li, Zhongyi Huang, Hao Wu

Abstract

Solving Partial Differential Equation (PDE) interface problems on varying domains is a critical task in design and optimization, yet it remains computationally prohibitive for traditional solvers. Although operator learning has shown promise on fixed geometries, its potential for geometry-dependent interface problems has been largely unexplored. To bridge this gap, we propose an extension-based neural operator framework applicable to general linear interface problems. A key innovation of our method is the integration of the Tailored Finite Point Method (TFPM) with our base network, which reduces memory consumption and effectively alleviates the curse of dimensionality. On the theoretical front, we establish the continuity of the Helmholtz operator with respect to domain perturbations and provide rigorous error estimates for the proposed encodings. Comprehensive numerical experiments demonstrate that our framework achieves state-of-the-art accuracy and robustness. Consequently, this work provides a powerful, data-efficient tool for varying-domain simulations, offering new possibilities for real-time shape optimization.

Paper Structure

This paper contains 20 sections, 4 theorems, 62 equations, 6 figures, and 2 tables.

Key Result

Theorem 1

The operator $\hat{\mathcal{G}}$ defined by $\mathcal{G}$ in Definition 1 is unique, i.e., if there exists another operator $\hat{\mathcal{G}}_2$ satisfying the defining equation of $\hat{\mathcal{G}}$, then $\hat{\mathcal{G}} = \hat{\mathcal{G}}_2$.

Figures (6)

  • Figure 1: Neural network architecture
  • Figure 2: Prediction results when only the external region is varied. The relative $L^2$ errors for the two experiments are 1.07% and 2.69%, respectively.
  • Figure 3: Prediction results for the star-shaped internal interface. The relative $L^2$ error for this experiment is 2.20%.
  • Figure 4: Prediction of 3D Example on TFPM basis. First row: 3D grid point distribution map; Second row: Cross-sectional view at $x = 0.5$.
  • Figure 5: Schematic diagram of transport.
  • ...and 1 more figure

Theorems & Definitions (8)

  • Definition 1
  • Theorem 1: Uniqueness
  • Theorem 2: Continuity of interface under constant interface conditions
  • Proof
  • Theorem 3: Continuity of interface under general interface conditions
  • Proof
  • Theorem 4: Continuity of domain of Dirichlet zero boundary condition
  • Proof