Full Lyapunov Exponents spectrum with Deep Learning from single-variable time series

Carmen Mayora-Cebollero, Ana Mayora-Cebollero, Álvaro Lozano, Roberto Barrio

Abstract

In this article we study whether a Deep Learning technique can be used to obtain approximate values of the Lyapunov exponents of a dynamical system. Moreover, we want to know whether Machine Learning techniques, once trained, are able to provide the complete Lyapunov exponent spectrum from just single-variable time series. We train a Convolutional Neural Network and use the resulting network to approximate the complete spectrum using the time series of just one variable of the studied systems (the Lorenz system and a coupled Lorenz system). The results are remarkable, as all the values are well approximated with only partial data. This strategy allows us to speed up the complete analysis of the systems and also to study the hyperchaotic dynamics of the coupled Lorenz system.
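The network described above is benchmarked against classical numerical techniques for the Lyapunov spectrum. As a point of reference, the standard Benettin-style method integrates the system together with its variational equation and periodically re-orthonormalizes the tangent vectors via QR decomposition. The sketch below illustrates this classical baseline for the Lorenz system; the fixed parameter values, step size, and integration length are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Classic Lorenz parameters (sigma, r, b); the paper scans r and b,
# these fixed values are only for illustration.
SIGMA, R_PARAM, B_PARAM = 10.0, 28.0, 8.0 / 3.0

def lorenz(v):
    x, y, z = v
    return np.array([SIGMA * (y - x),
                     x * (R_PARAM - z) - y,
                     x * y - B_PARAM * z])

def jacobian(v):
    x, y, z = v
    return np.array([[-SIGMA, SIGMA, 0.0],
                     [R_PARAM - z, -1.0, -x],
                     [y, x, -B_PARAM]])

def rk4_step(v, Q, dt):
    """One RK4 step of the flow together with its variational equation."""
    f = lambda v, Q: (lorenz(v), jacobian(v) @ Q)
    k1v, k1q = f(v, Q)
    k2v, k2q = f(v + 0.5 * dt * k1v, Q + 0.5 * dt * k1q)
    k3v, k3q = f(v + 0.5 * dt * k2v, Q + 0.5 * dt * k2q)
    k4v, k4q = f(v + dt * k3v, Q + dt * k3q)
    return (v + dt / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v),
            Q + dt / 6.0 * (k1q + 2 * k2q + 2 * k3q + k4q))

def lyapunov_spectrum(n_steps=40000, dt=0.01, transient=2000):
    v = np.array([1.0, 1.0, 1.0])
    for _ in range(transient):          # discard the transient
        v, _ = rk4_step(v, np.eye(3), dt)
    Q = np.eye(3)
    log_sums = np.zeros(3)
    for _ in range(n_steps):
        v, Q = rk4_step(v, Q, dt)
        Q, Rm = np.linalg.qr(Q)         # re-orthonormalize tangent vectors
        log_sums += np.log(np.abs(np.diag(Rm)))
    return log_sums / (n_steps * dt)

if __name__ == "__main__":
    print(lyapunov_spectrum())  # roughly [0.9, 0.0, -14.6]
```

Note that this classical approach needs the full vector field and its Jacobian, whereas the trained network only sees a single-variable time series — which is the speed-up the paper exploits.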

Paper Structure

This paper contains 16 sections, 5 equations, 11 figures, and 1 table.

Figures (11)

  • Figure 1: Simple graphic representation of the architecture of a 1D CNN with three channels in the depicted convolutional layers.
  • Figure 2: 1D analysis ($\sigma=10$, $b=2.2$) of Lyapunov Exponents in the Lorenz system when training the CNN with non-random data (Huber loss value $0.091\pm0.005$). Parts with orange and purple background colors correspond to the regions where the DL technique seems to give the worst results. The region shaded in blue is used in a comparison with a subsequent analysis in Figure 5. (See the text for more details.)
  • Figure 3: 2D biparametric analysis of Lyapunov Exponents in the Lorenz system ($\sigma=10$) when training with non-random data (Huber loss value $0.115\pm0.005$). From left to right, ${\rm{LE}}_1$, ${\rm{LE}}_2$ and ${\rm{LE}}_3$. From top to bottom, results with classical techniques and with DL techniques. Lines in the top-left panel correspond to lines from where training data (light green) and validation dataset (dark green) are obtained. (See the text for more details.)
  • Figure 4: Error analysis of Lyapunov Exponents prediction in an $(r,b)$-parametric plane of the Lorenz system (see Figure 3) when training with non-random data. From left to right, ${\rm{LE}}_1$, ${\rm{LE}}_2$ and ${\rm{LE}}_3$. Color code is given at the bottom. (See the text for more details.)
  • Figure 5: 1D parametric analysis ($\sigma=10$, $b=2.2$) of Lyapunov Exponents in the Lorenz system when training the CNN with random data (Huber loss value $0.055\pm0.003$). Orange and purple background colors correspond to regions where the DL technique seems to fail the most. The region shaded in blue is used to compare with the previous analysis of Figure 2. (See the text for more details.)
  • ...and 6 more figures
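Figure 1 depicts a 1D CNN whose convolutional layers have three channels. As a rough, hypothetical illustration of how such a network maps a single-variable time series to the three Lyapunov exponents of the Lorenz system, consider a minimal numpy forward pass (the kernel width, layer count, and the pooling/linear head here are assumptions, not the paper's exact architecture, and the weights are random rather than trained):

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels, bias):
    """Valid-mode 1D convolution followed by ReLU.
    x: (in_channels, length); kernels: (out_channels, in_channels, k)."""
    out_c, in_c, k = kernels.shape
    length = x.shape[1] - k + 1
    out = np.empty((out_c, length))
    for o in range(out_c):
        for t in range(length):
            out[o, t] = np.sum(kernels[o] * x[:, t:t + k]) + bias[o]
    return np.maximum(out, 0.0)

# Single-variable input time series, as in the paper's setting: one channel.
series = rng.standard_normal((1, 128))

# Two convolutional layers with three channels each (cf. Figure 1).
h = conv1d(series, rng.standard_normal((3, 1, 5)), np.zeros(3))
h = conv1d(h, rng.standard_normal((3, 3, 5)), np.zeros(3))

# Global average pooling and a linear head regressing the three
# Lyapunov exponents (LE1, LE2, LE3) at once.
pooled = h.mean(axis=1)                      # shape (3,)
W, b = rng.standard_normal((3, 3)), np.zeros(3)
le_prediction = W @ pooled + b               # shape (3,)
print(le_prediction.shape)  # (3,)
```

The key point the sketch makes concrete is the output head: the network is a regressor producing the full spectrum in one forward pass from a single observed variable.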