Linking Microscopic and Macroscopic Models for Evolution: Markov Chain Network Training and Conservation Law Approximations

Roderick V. N. Melnik

TL;DR

The paper addresses the challenge of unifying neural-network training dynamics with macroscopic conservation-law approximations by casting learning as a perturbed generalized dynamic system (GDS) and employing Markov-chain approximations to couple microscopic and macroscopic descriptions. It develops a systems-theoretic framework that combines forward neural-network evolution with backward Markov corrections and derives stability and consistency conditions for network-based conservation-law schemes. Key contributions include a Hamiltonian-approximation approach via neural networks, a two-scale training-dynamics model with ILUMC/ILPMC, and a Markov-chain discretization methodology that recovers classical hyperbolic schemes as special cases, with explicit CFL-type stability criteria. The approach yields constructive, stable numerical methods for dynamic-system modeling and provides a principled link between neural-network training and conservation-law simulations, with potential impact on physics-informed learning and PDE-based modeling.

Abstract

In this paper, a general framework for the analysis of a connection between the training of artificial neural networks via the dynamics of Markov chains and the approximation of conservation law equations is proposed. This framework allows us to demonstrate an intrinsic link between microscopic and macroscopic models for evolution via the concept of perturbed generalized dynamic systems. The main result is exemplified with a number of illustrative examples where efficient numerical approximations follow directly from network-based computational models, viewed here as Markov chain approximations. Finally, stability and consistency conditions of such computational models are discussed.
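The Markov-chain reading of conservation-law discretizations can be made concrete with a minimal sketch. The scheme below is an illustrative example, not the paper's own construction: the standard first-order upwind scheme for the linear advection equation $u_t + a u_x = 0$ is rewritten as a convex-combination (row-stochastic) update, whose coefficients are valid transition probabilities exactly when the CFL number $p = a\,\Delta t/\Delta x$ lies in $[0, 1]$, giving a CFL-type stability criterion of the kind the paper derives.

```python
import numpy as np

# Illustrative sketch (not taken from the paper): the first-order upwind
# scheme for u_t + a u_x = 0 rewritten as a Markov-chain update.  The
# coefficients (p, 1 - p) are transition probabilities exactly when the
# CFL number p = a*dt/dx lies in [0, 1].
a, dx, dt = 1.0, 0.02, 0.01
p = a * dt / dx                        # CFL number = transition probability
assert 0.0 <= p <= 1.0, "CFL-type stability criterion violated"

x = np.arange(0.0, 1.0, dx)
u0 = np.exp(-100.0 * (x - 0.3) ** 2)   # smooth initial profile
u = u0.copy()

for _ in range(20):
    # New state = convex combination of the upwind neighbour and the
    # current cell, i.e. one step of a row-stochastic (Markov) transition.
    u = p * np.roll(u, 1) + (1.0 - p) * u

# Stochasticity of the update gives a discrete maximum principle and, with
# periodic boundaries, exact conservation of the total "mass" sum(u).
print(u.min() >= u0.min(), u.max() <= u0.max(), np.isclose(u.sum(), u0.sum()))
```

The probabilistic interpretation fails precisely when $p \notin [0, 1]$: the update is then no longer a convex combination, and the discrete maximum principle (and with it stability) is lost.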

Paper Structure

This paper contains 10 sections, 3 theorems, 77 equations, 5 figures.

Key Result

Theorem 3.1

If $H \in \mathbb{L}^1(X_T)$, then for any arbitrarily small $\epsilon>0$ there exists a network $\tilde{H}$ such that $\| H - \tilde{H} \|_{\mathbb{L}^1(X_T)} < \epsilon$.
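Theorem 3.1 is a universal-approximation statement in the $\mathbb{L}^1$ sense. A minimal numerical illustration (an assumption-laden sketch, not the paper's construction): approximate a target $H$, here taken to be $\sin(2\pi x)$ on $[0,1]$, by a one-hidden-layer sigmoid network whose hidden weights are drawn at random and whose output weights are fitted by least squares, and observe the discrete $\mathbb{L}^1$ error shrink as the network grows.

```python
import numpy as np

# Sketch only: random-feature fit of a one-hidden-layer sigmoid network.
# The target H(x) = sin(2*pi*x) and the fitting method are illustrative
# assumptions, not the construction used in the paper.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 400)
H = np.sin(2.0 * np.pi * x)

def l1_error(n_units):
    """Discrete L^1 error of a fitted network with n_units hidden sigmoids."""
    w = rng.normal(scale=10.0, size=n_units)      # random hidden weights
    b = rng.normal(scale=10.0, size=n_units)      # random hidden biases
    Phi = 1.0 / (1.0 + np.exp(-(np.outer(x, w) + b)))  # sigmoid features
    c, *_ = np.linalg.lstsq(Phi, H, rcond=None)   # least-squares output layer
    H_tilde = Phi @ c
    return np.mean(np.abs(H - H_tilde))           # mean-abs proxy for L^1

err_small, err_large = l1_error(5), l1_error(50)
print(err_small, err_large)   # the larger network achieves a smaller L^1 error
```

Growing the hidden layer drives the $\mathbb{L}^1$ error toward zero, which is the qualitative content of the theorem: for any $\epsilon > 0$, some network $\tilde{H}$ gets within $\epsilon$ of $H$.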

Figures (5)

  • Figure 1: A typical unit of network architecture
  • Figure 2: Network architectures and information flow chart for numerical approximations of conservation laws: (a) forward centered Euler, (b) leap-frog, (c) Lax-Wendroff.
  • Figure 3: Flux limiters in the probabilistic approach for conservation laws
  • Figure 4: A schematic representation of flux limiters as functions of velocity
  • Figure 5: Markov chain as a tool to combine feedforward and feedback neural networks

Theorems & Definitions (9)

  • Theorem 3.1
  • Remark 3.1
  • Remark 4.1
  • Proposition 4.1
  • Remark 4.2
  • Definition 5.1
  • Remark 5.1
  • Definition 5.2
  • Proposition 5.1