Linking Microscopic and Macroscopic Models for Evolution: Markov Chain Network Training and Conservation Law Approximations
Roderick V. N. Melnik
TL;DR
The paper addresses the challenge of unifying neural-network training dynamics with macroscopic conservation-law approximations by casting learning as a perturbed generalized dynamic system (GDS) and employing Markov-chain approximations to couple microscopic and macroscopic descriptions. It develops a systems-theoretic framework that combines forward neural-network evolution with backward Markov corrections and derives stability and consistency conditions for network-based conservation-law schemes. Key contributions include a Hamiltonian-approximation approach via neural networks, a two-scale training-dynamics model with ILUMC/ILPMC, and a Markov-chain discretization methodology that recovers classical hyperbolic schemes as special cases, with explicit CFL-type stability criteria. The approach yields constructive, stable numerical methods for dynamic-system modeling and establishes a principled link between neural-network training and conservation-law simulations, with potential impact on physics-informed learning and PDE-based modeling.
Abstract
In this paper, a general framework is proposed for analyzing the connection between the training of artificial neural networks via Markov-chain dynamics and the approximation of conservation-law equations. This framework allows us to demonstrate an intrinsic link between microscopic and macroscopic models for evolution via the concept of perturbed generalized dynamic systems. The main result is exemplified with a number of illustrative examples in which efficient numerical approximations follow directly from network-based computational models, viewed here as Markov-chain approximations. Finally, stability and consistency conditions of such computational models are discussed.
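To make the Markov-chain view of conservation-law schemes concrete, the following sketch (not taken from the paper; grid parameters are chosen for illustration) shows the standard first-order upwind scheme for linear advection $u_t + a u_x = 0$ reinterpreted probabilistically: with $\lambda = a\,\Delta t/\Delta x$, the update is a convex combination of neighboring states exactly when $0 \le \lambda \le 1$, which is the CFL-type stability condition, and the scheme's coefficients then form a row of a stochastic transition matrix.

```python
import numpy as np

# Illustrative sketch, assuming the classical upwind discretization of
# u_t + a u_x = 0 (a > 0). With lam = a*dt/dx, the update
#   u_i^{n+1} = (1 - lam) * u_i^n + lam * u_{i-1}^n
# has nonnegative coefficients summing to one iff 0 <= lam <= 1 (CFL),
# i.e. it is one transition step of a Markov chain on the grid.

a, dx, dt = 1.0, 0.1, 0.08     # advection speed and grid spacing (chosen values)
lam = a * dt / dx              # Courant number
assert 0.0 <= lam <= 1.0, "CFL violated: coefficients are no longer probabilities"

n = 50
u = np.zeros(n)
u[10:20] = 1.0                 # initial square pulse

# One upwind step == one Markov transition with probabilities (1 - lam, lam),
# under periodic boundary conditions so total "mass" is conserved.
u_next = (1.0 - lam) * u + lam * np.roll(u, 1)

print(np.isclose(u_next.sum(), u.sum()))   # conservation of total mass
print(u_next.min() >= 0.0)                 # positivity: probabilistic interpretation holds
```

Under the probabilistic reading, conservation of mass corresponds to the transition matrix being stochastic, and violating the CFL bound makes a coefficient negative, breaking both the Markov-chain interpretation and stability.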
