Prediction of large-scale dynamics using unresolved computations

A. J. Chorin, A. Kast, R. Kupferman

TL;DR

The paper develops an ensemble-based framework to predict large-scale dynamics of underresolved PDEs using an invariant prior and a finite set of collective variables $U_\alpha$. It derives the effective evolution equation $\dfrac{dV_\alpha}{dt} = \left\langle (g_\alpha, F(u)) \right\rangle_{V(t)}$, enabling a closed system for the means of the chosen observables. Conditional expectations are computed analytically under Gaussian priors (and Gaussian-approximated non-Gaussian priors) to render the right-hand side practical. The approach is demonstrated on a linear Schrödinger equation and a nonlinear Hamiltonian system, showing that a small number of kernels yields accurate mean evolution at a fraction of the cost, while also discussing its limitations and avenues for refinement (e.g., higher moments, time-varying kernels).
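To make the closure concrete, here is a minimal numerical sketch of the framework on a periodic grid, specialized to linear dynamics $u_t = Lu$ so that the conditioned right-hand side closes exactly as a linear $N\times N$ system. The grid size, prior covariance, kernel width, and the diffusive stand-in for $F$ are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the mean-evolution framework on a periodic grid,
# specialized to linear dynamics du/dt = L u so the conditioned RHS
# closes exactly. All numerical choices (n, N, ell, sigma, L) are
# illustrative stand-ins, not the paper's.
import numpy as np
from scipy.integrate import solve_ivp

n = 128                                 # resolved grid points
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]

# Prior: zero-mean Gaussian with a smooth stationary covariance C(x, y).
ell = 0.1
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, 1.0 - d)              # periodic distance on [0, 1)
C = np.exp(-(d / ell) ** 2)

# Collective variables U_a = (g_a, u): Gaussian kernels on a coarse grid.
N, sigma = 5, 0.05
centers = (np.arange(N) + 0.5) / N
dc = np.abs(x[None, :] - centers[:, None])
dc = np.minimum(dc, 1.0 - dc)
G = np.exp(-(dc / sigma) ** 2) * dx     # rows are kernels; dx = quadrature weight

# Gaussian conditioning (Lemma 1): <u(x)>_V = sum_b c_b(x) V_b,
# with c = C G^T M^{-1} and M = G C G^T.
M = G @ C @ G.T
c = C @ G.T @ np.linalg.inv(M)

# Linear stand-in dynamics F(u) = L u: periodic second difference.
I = np.eye(n)
L = (np.roll(I, 1, axis=1) - 2 * I + np.roll(I, -1, axis=1)) / dx ** 2

# Effective equations dV_a/dt = <(g_a, L u)>_V = (G L c) V: an N x N system.
A = G @ L @ c
V0 = np.random.default_rng(0).standard_normal(N)
sol = solve_ivp(lambda t, V: A @ V, (0.0, 1.0), V0)
print(sol.y[:, -1])                     # mean collective variables at t = 1
```

For nonlinear $F$ the conditioned average no longer reduces to a constant matrix; the paper handles that case by evaluating the conditional expectations under the (Gaussian or Gaussian-approximated) prior at each step.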

Abstract

We present a theoretical framework and numerical methods for predicting the large-scale properties of solutions of partial differential equations that are too complex to be properly resolved. We assume that prior statistical information about the distribution of the solutions is available, as is often the case in practice. The quantities we can compute condition the prior information and allow us to calculate mean properties of solutions in the future. We derive approximate ways for computing the evolution of the probabilities conditioned by what we can compute, and obtain ordinary differential equations for the expected values of a set of large-scale variables. Our methods are demonstrated on two simple but instructive examples, where the prior information consists of invariant canonical distributions.

Paper Structure

This paper contains 6 sections, 3 theorems, 81 equations, and 5 figures.

Key Result

Lemma 1

The conditional expectation of the function $u(x)$ is a linear form in the conditioning data $V$:
$$\left\langle u(x) \right\rangle_V = \sum_{\alpha=1}^{N} c_\alpha(x)\, V_\alpha,$$
where the vector of functions $c_\alpha(x)$ is given by
$$c_\alpha(x) = \sum_{\beta=1}^{N} m^{-1}_{\beta\alpha} \left\langle u(x)\, U_\beta \right\rangle,$$
and where the $m^{-1}_{\beta\alpha}$ are the entries of an $N\times N$ matrix $M^{-1}$ whose inverse $M$ has entries
$$m_{\alpha\beta} = \left\langle U_\alpha\, U_\beta \right\rangle.$$
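In matrix form on a grid, the lemma reads $c = CG^\top M^{-1}$ with $M = GCG^\top$, where the rows of $G$ are the kernels $g_\alpha$ (quadrature weights included) and $C$ is the prior covariance. Below is a minimal sketch with an arbitrary symmetric positive-definite covariance and random kernels standing in for the paper's choices; note that the resulting interpolant reproduces the data exactly, $(g_\alpha, \langle u \rangle_V) = V_\alpha$.

```python
# Lemma 1 in matrix form on a grid. The covariance and kernels below are
# illustrative stand-ins, not the paper's.
import numpy as np

rng = np.random.default_rng(1)
n, N = 200, 4

Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
C = Q @ np.diag(1.0 / (1.0 + np.arange(n))) @ Q.T   # some SPD covariance
G = rng.standard_normal((N, n))          # rows: kernels g_a (weights included)

M = G @ C @ G.T                          # m_ab = <U_a U_b> = (g_a, C g_b)
c = C @ G.T @ np.linalg.inv(M)           # regression functions, one column per a

V = rng.standard_normal(N)               # conditioning data V_a
u_mean = c @ V                           # <u(x)>_V, a linear form in V

# Consistency check: the interpolant reproduces the data, (g_a, <u>_V) = V_a.
assert np.allclose(G @ u_mean, V)
```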

Figures (5)

  • Figure 1: Example of regression functions for the linear Schrödinger equation. Values for five collective variables were chosen, representing local averages of $p(x)$ on a uniformly spaced grid. The kernels are translates of each other, with Gaussian profiles of width $\sigma$ centered at the grid points. The lines show the regression function, or optimal interpolant, $\left\langle p(x) \right\rangle_V$ given by equation (\ref{Lin:Interpolation}) for $\sigma=\Delta x$ (solid), $\sigma=0.5\,\Delta x$ (dashed), and $\sigma=0.1\,\Delta x$ (dash-dot).
  • Figure 2: Mean evolution of the collective variable $U^p_1[p(\cdot),q(\cdot)]$ for $N=5$ and a random choice of the initial data $V^p$ and $V^q$. The open dots represent the exact solution (\ref{Lin:Exact3}), whereas the lines represent the approximate solution obtained by integrating the set of $10$ ordinary differential equations (\ref{Lin:Effective}). The three graphs correspond to different kernel widths $\sigma$: (a) $\sigma=\Delta x$, (b) $\sigma=0.5\,\Delta x$, and (c) $\sigma=0.1\,\Delta x$.
  • Figure 3: The covariance $\left\langle p(i)\,p(j) \right\rangle = \left\langle q(i)\,q(j) \right\rangle$ as a function of the grid separation $i-j$ for the non-Gaussian probability distribution (\ref{NSE:Prior}) with $n=16$. These values were computed by a Metropolis Monte-Carlo simulation (a sampler sketch in this spirit follows the list).
  • Figure 4: Evolution in time of the mean values of the four collective variables: $V^p_1$ ($\blacktriangledown$), $V^p_2$ ($\blacktriangle$), $V^q_1$ ($\blacksquare$), and $V^q_2$ ($\blacklozenge$). The symbols represent the values obtained by solving the $32$ equations (\ref{NSE:Equation}) for $10^4$ initial conditions compatible with the initial data and averaging; the solid lines are the values of the four corresponding functions obtained by integrating equation (\ref{NSE:Approx}). Panels (a) and (b) show the time intervals $[0,1]$ and $[0,10]$, respectively.
  • Figure 5: Evolution of the distribution of the collective variable $U^p_1$. The $x$-axis represents time, the $y$-axis represents the value of $U^p_1$, and the $z$-axis is proportional to the density of states that correspond to the same value of $U^p_1$ at the given time.
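The covariance in Figure 3 is estimated by sampling the non-Gaussian prior with a Metropolis walk. Here is a minimal sampler sketch in the same spirit; the quartic lattice Hamiltonian, step size, burn-in, and thinning below are illustrative assumptions rather than the paper's actual choices.

```python
# Sketch of the Figure 3 computation: estimate <p(i) p(j)> as a function of
# separation by Metropolis Monte-Carlo sampling of exp(-H). The quartic
# lattice Hamiltonian is an illustrative stand-in, not the paper's.
import numpy as np

rng = np.random.default_rng(2)
n = 16

def H(p):
    # nearest-neighbour coupling plus a quartic on-site term (periodic)
    return 0.5 * np.sum((np.roll(p, -1) - p) ** 2) + 0.25 * np.sum(p ** 4)

p = rng.standard_normal(n)
samples, step = [], 0.5
for it in range(20000):                      # single-site Metropolis updates
    i = rng.integers(n)
    trial = p.copy()
    trial[i] += step * rng.standard_normal()
    if rng.random() < np.exp(H(p) - H(trial)):   # accept with min(1, ratio)
        p = trial
    if it > 5000 and it % 10 == 0:               # discard burn-in, then thin
        samples.append(p.copy())

S = np.array(samples)
# covariance at separation k, averaged over the lattice (stationarity)
cov = [np.mean(S * np.roll(S, k, axis=1)) for k in range(n)]
print(np.round(cov, 3))
```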

Theorems & Definitions (3)

  • Lemma 1
  • Lemma 2
  • Lemma 3