Short-Term Turbulence Prediction for Seeing Using Machine Learning

Mary Joe Medlej, Rahul Srinivasan, Simon Prunet, Aziz Ziad, Christophe Giordano

Abstract

Optical turbulence, driven by fluctuations of the atmospheric refractive index, poses a significant challenge to ground-based optical systems, as it distorts the propagation of light. This degradation affects both astronomical observations and free-space optical communications. While adaptive optics systems correct turbulence effects in real-time, their reactive nature limits their effectiveness under rapidly changing conditions, underscoring the need for predictive solutions. In this study, we address the problem of short-term turbulence forecasting by leveraging machine learning models to predict the atmospheric seeing parameter up to two hours in advance. We compare statistical and deep learning approaches, with a particular focus on probabilistic models that not only produce accurate forecasts but also quantify predictive uncertainty, which is crucial for robust decision-making in dynamic environments. Our evaluation includes Gaussian processes (GPs) for statistical modeling, recurrent neural networks (RNNs) and long short-term memory networks (LSTMs) as deterministic baselines, and our novel implementation of a normalizing flow for time series (FloTS) as a flexible probabilistic deep learning method. All models are trained exclusively on historical seeing data, allowing for a fair performance comparison. We show that FloTS achieves the best overall balance between predictive accuracy and well-calibrated uncertainty.
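Since the models are trained exclusively on historical seeing data, the forecasting task reduces to supervised learning on sliding windows of a single time series. The sketch below illustrates one plausible way to build such (input, target) pairs; the helper name `make_windows` and the 5-minute sampling cadence are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def make_windows(series, n_in, n_out):
    """Slice a 1-D seeing time series into (input, target) pairs.

    Hypothetical helper: assuming 5-minute sampling, a 2-hour input
    window is n_in = 24 points and a 2-hour forecast horizon is
    n_out = 24 points.
    """
    X, Y = [], []
    for t in range(len(series) - n_in - n_out + 1):
        X.append(series[t : t + n_in])            # past seeing values
        Y.append(series[t + n_in : t + n_in + n_out])  # future targets
    return np.array(X), np.array(Y)

# Toy seeing series (arcsec); real inputs would be site-monitor data.
seeing = 1.0 + 0.2 * np.sin(np.linspace(0, 20, 200))
X, Y = make_windows(seeing, n_in=24, n_out=24)
print(X.shape, Y.shape)  # (153, 24) (153, 24)
```

Each row of `X` would then feed an RNN, LSTM, GP, or FloTS model, with the matching row of `Y` as the multi-step forecast target.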

Paper Structure

This paper contains 15 sections, 26 equations, 10 figures, and 1 table.

Figures (10)

  • Figure 1: RMSE for each training duration: 2 hours (blue), 3 hours (orange), 4 hours (green), 5 hours (red), and 6 hours (purple), evaluated over a fixed 2-hour prediction window. Results are shown for all models: RNN, LSTM, GP, and FloTS (FLOW).
  • Figure 2: Illustration of the calculation and calibration of the coverage for an example: predictions for +10 and +20 minutes. Left: examples of different observations (green star) and the corresponding predicted probability distribution shown by two contours of nominal coverage 0.2 (inner, red) and 0.8 (outer, blue). Right: the corresponding uncalibrated PP-plot of the empirical vs nominal coverage. The dashed black diagonal line is the ideal coverage, and the red (blue) curve represents under- (over-) confident regions. The two figures together illustrate that the contours of nominal coverage 0.2 (0.8) have an empirical coverage of 0.32 (0.68), indicating under- (over-) confidence. The ideal calibration temperature vector, a function of the nominal coverage, contracts (expands) the probability density of under- (over-) confident regions in the parameter space.
  • Figure 3: Comparison of the PP plots from the uncalibrated (purple) and calibrated (blue) FloTS predictions. The 45$^\circ$ black dashed line represents the ideal coverage. The green dot-dashed line, which corresponds to the secondary y-axis, shows the optimal temperature as a function of nominal coverage. The calibration temperature is parameterized according to Equation \ref{eq:calib_temp_polynomial}.
  • Figure 4: Same as Figure \ref{fig:coverage_flow} for GP predictions. An alternative calibration is presented in Appendix \ref{sec:appendix_piecewiselinear}.
  • Figure 5: Forecasting performance of all models trained using a 2-hour input window. Left: RMSE, and Right: Pearson correlation coefficient, both computed between the predicted and observed seeing values and plotted as a function of forecast lead time.
  • ...and 5 more figures
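The empirical-vs-nominal coverage check described in the Figure 2 caption can be sketched directly from predictive samples. The toy setup below is illustrative only (the Gaussian forecaster and the 0.7 scale factor are assumptions, not the paper's models): a deliberately overconfident predictor yields empirical coverage below the nominal level, i.e. a PP-plot curve under the diagonal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a probabilistic forecaster: each observation gets
# 500 predictive samples drawn from a deliberately overconfident
# (too-narrow) distribution with scale 0.7 instead of the true 1.0.
n_obs, n_samp = 2000, 500
mu = rng.normal(size=n_obs)
obs = mu + rng.normal(size=n_obs)                            # truth, unit scale
pred = mu[:, None] + 0.7 * rng.normal(size=(n_obs, n_samp))  # predicted scale 0.7

# Nominal central-interval coverages (x-axis of the PP-plot).
nominal = np.array([0.2, 0.5, 0.8])

# For each nominal level, form the central predictive interval from
# sample quantiles, then measure the fraction of observations inside.
lo = np.quantile(pred, (1 - nominal) / 2, axis=1)  # shape (3, n_obs)
hi = np.quantile(pred, (1 + nominal) / 2, axis=1)
empirical = ((obs >= lo) & (obs <= hi)).mean(axis=1)

# An overconfident model falls below the diagonal: empirical < nominal.
print(empirical)
```

A temperature-based calibration of the kind described in the captions would then rescale the predictive density so that each nominal level's empirical coverage moves back onto the diagonal.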