A Distribution-to-Distribution Neural Probabilistic Forecasting Framework for Dynamical Systems

Tianlin Yang, Hailiang Du, Louis Aslett

Abstract

Probabilistic forecasting provides a principled framework for uncertainty quantification in dynamical systems by representing predictions as probability distributions rather than deterministic trajectories. However, existing forecasting approaches, whether physics-based or neural-network-based, remain fundamentally trajectory-oriented: predictive distributions are usually accessed through ensembles or sampling, rather than evolved directly as dynamical objects. A distribution-to-distribution (D2D) neural probabilistic forecasting framework is developed to operate directly on predictive distributions. The framework introduces a distributional encoding and decoding structure around a replaceable neural forecasting module, using kernel mean embeddings to represent input distributions and mixture density networks to parameterise output predictive distributions. This design enables recursive propagation of predictive uncertainty within a unified end-to-end neural architecture, with model training and evaluation carried out directly in terms of probabilistic forecast skill. The framework is demonstrated on the Lorenz63 chaotic dynamical system. Results show that the D2D model captures nontrivial distributional evolution under nonlinear dynamics, produces skillful probabilistic forecasts without explicit ensemble simulation, and remains competitive with, and in some cases outperforms, a simplified perfect model benchmark. These findings point to a new paradigm for probabilistic forecasting, in which predictive distributions are learned and evolved directly rather than reconstructed indirectly through ensemble-based uncertainty propagation.
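The abstract describes a pipeline in which an input distribution is encoded by a kernel mean embedding, passed through a neural forecasting module, and decoded by a mixture density network into predictive-distribution parameters. The paper's actual architecture is not reproduced here; the following is only a minimal numpy sketch of that encode–forecast–decode shape, with a single untrained linear layer standing in for the neural module, and with the landmark grid, bandwidth, and weight matrix all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def kernel_mean_embedding(samples, landmarks, bandwidth=1.0):
    """Finite-dimensional kernel mean embedding: average RBF kernel
    evaluations between the sample set and fixed landmark points."""
    d2 = (samples[:, None] - landmarks[None, :]) ** 2
    return np.exp(-d2 / (2 * bandwidth**2)).mean(axis=0)

def mdn_decode(features, W, K=3):
    """Map an embedding vector to mixture density parameters
    (weights, means, stds). A single linear layer stands in for the
    paper's replaceable neural forecasting module."""
    out = features @ W                      # shape (3K,)
    logits, means, log_stds = np.split(out, 3)
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()                # softmax mixture weights
    return weights, means, np.exp(log_stds)

def mixture_pdf(x, weights, means, stds):
    """Evaluate the Gaussian mixture predictive density at x."""
    comp = np.exp(-0.5 * ((x - means) / stds) ** 2) / (stds * np.sqrt(2 * np.pi))
    return float(weights @ comp)

# Toy input distribution: samples of a state variable at the current time.
samples = rng.normal(loc=0.5, scale=0.3, size=500)
landmarks = np.linspace(-3, 3, 16)          # hypothetical landmark grid
mu = kernel_mean_embedding(samples, landmarks)

W = rng.normal(scale=0.1, size=(16, 9))     # untrained placeholder weights
w, m, s = mdn_decode(mu, W)
density = mixture_pdf(0.0, w, m, s)         # predictive density at x = 0
```

Recursive propagation, as described in the abstract, would amount to sampling from (or re-embedding) the output mixture and feeding it back through the same encode–forecast–decode loop at each lead time.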

Paper Structure

This paper contains 11 sections, 13 equations, and 5 figures.

Figures (5)

  • Figure 1: Schematic illustration of the proposed D2D neural network architecture.
  • Figure 2: An example of the evolution of the probability density function (PDF) of the $x$-variable. Panel (a) shows the reference distributional evolution under the underlying true dynamical system, approximated empirically by propagating a large ensemble of initial conditions under the numerically simulated Lorenz63 system. Panel (b) shows the corresponding PDF evolution generated by the proposed D2D model trained with the iterative strategy. Red dots indicate the corresponding observed values at each lead time.
  • Figure 3: Logarithmic score skill relative to climatology as a function of lead time for iterative D2D forecasts and perfect model forecasts. Panels (a)--(d) correspond to observational noise levels of 0.01, 0.02, 0.04, and 0.08, respectively. The horizontal black dashed line represents the zero-skill climatological forecast. The red curve denotes the perfect model forecast based on a 128-member ensemble propagated under the numerically simulated Lorenz63 system and converted to a predictive distribution using Gaussian kernel dressing \cite{roulston2003combining}. The remaining curves correspond to iterative D2D models trained with maximum curriculum lead times of $16\Delta t$, $32\Delta t$, $64\Delta t$, and $128\Delta t$. Shaded bands indicate 95% bootstrap resampling intervals computed on the test set.
  • Figure 4: Evolution of the predictive PDF for the $x$-variable generated by the $32\Delta t$ iterative D2D model, using the same initial condition as in Fig. \ref{fig:evolve}. Panel (a) corresponds to observational noise level $0.01$, and panel (b) corresponds to observational noise level $0.08$.
  • Figure 5: Logarithmic score skill relative to climatology for the direct and iterative D2D strategies under different observational noise levels. Panels (a)--(d) correspond to the same observational noise levels as in Fig. \ref{fig:loss_it_fig}. The blue curve denotes the iterative D2D model trained with maximum curriculum lead time $128\Delta t$. The red curve denotes the direct-forecast strategy, constructed from multiple direct D2D models trained at fixed lead times $1\Delta t$, $2\Delta t$, $4\Delta t$, $\ldots$, $128\Delta t$ and composed through a hierarchical temporal aggregation strategy to produce forecasts at arbitrary lead times.
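Figure 3's benchmark combines two standard ingredients: Gaussian kernel dressing, which turns a finite ensemble into a continuous predictive density by averaging Gaussian kernels centred on the ensemble members, and the logarithmic (ignorance) score evaluated relative to climatology. A minimal sketch of both, with toy ensemble sizes, locations, and kernel widths chosen purely for illustration (the paper's actual dressing parameters and skill convention are not specified here):

```python
import numpy as np

rng = np.random.default_rng(1)

def dressed_log_score(members, obs, sigma):
    """Gaussian kernel dressing: the predictive density is the average
    of Gaussian kernels of width sigma centred on each member. Returns
    the logarithmic (ignorance) score, i.e. the negative log of that
    density at the verifying observation (lower is better)."""
    z = (obs - members) / sigma
    dens = np.mean(np.exp(-0.5 * z**2) / (sigma * np.sqrt(2 * np.pi)))
    return -np.log(dens)

# Toy example: a 128-member forecast ensemble near the observation,
# and a broad climatological sample as the zero-skill reference.
ensemble = rng.normal(loc=1.0, scale=0.5, size=128)
climatology = rng.normal(loc=0.0, scale=8.0, size=10_000)
obs = 1.1

score_fc = dressed_log_score(ensemble, obs, sigma=0.2)
score_cl = dressed_log_score(climatology, obs, sigma=1.0)

# One common skill convention: climatology score minus forecast score,
# so positive values mean the forecast beats the climatological baseline.
skill = score_cl - score_fc
```

In the figures, this quantity is averaged over the test set at each lead time, so skill curves decaying toward the zero line trace the loss of predictability under the chaotic dynamics.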