
Recurrent Quantum Feature Maps for Reservoir Computing

Utkarsh Singh, Aaron Z. Goldberg, Christoph Simon, Khabat Heshami

Abstract

Reservoir computing promises a fast method for handling large amounts of temporal data. This hinges on constructing a good reservoir--a dynamical system capable of transforming inputs into a high-dimensional representation while remembering properties of earlier data. In this work, we introduce a reservoir based on recurrent quantum feature maps where a fixed quantum circuit is reused to encode both current inputs and a classical feedback signal derived from previous outputs. We evaluate the model on the Mackey-Glass time-series prediction task using our recently introduced CP feature map, and find that it achieves lower mean squared error than standard classical baselines, including echo state networks and multilayer perceptrons, while maintaining compact circuit depth and qubit requirements. We further analyze memory capacity and show that the model effectively retains temporal information, consistent with its forecasting accuracy. Finally, we study the impact of realistic noise and find that performance is robust to several noise channels but remains sensitive to two-qubit gate errors, identifying a key limitation for near-term implementations.
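
To make this loop concrete, the sketch below implements one reservoir step and a linear readout under explicit assumptions: Qiskit's ZZFeatureMap stands in for the encoding circuit $U(\cdot)$ (the paper's CP feature map is not a standard library component), the window size and feedback scaling $\alpha$ are placeholder values, the feedback vector is derived from the measurement probabilities in an arbitrary illustrative way, and the readout is an ordinary ridge regression. Names and settings here are illustrative, not the authors' implementation.

```python
import numpy as np
from qiskit.circuit.library import ZZFeatureMap
from qiskit.quantum_info import Statevector
from sklearn.linear_model import Ridge

# Illustrative placeholders, not the paper's settings.
WINDOW = 4   # length of the input window [x_{t-tau}, ..., x_t]
ALPHA = 0.5  # feedback scaling, alpha in [0, 1]

encoder = ZZFeatureMap(feature_dimension=WINDOW, reps=1)  # stand-in for U(.)

def reservoir_step(window, feedback):
    """Apply U(window) then U^dagger(alpha * feedback); return measurement statistics."""
    left = encoder.assign_parameters(np.asarray(window))
    right = encoder.assign_parameters(ALPHA * np.asarray(feedback)).inverse()
    circuit = left.compose(right)                # input half, then inverted feedback half
    return Statevector(circuit).probabilities()  # ideal (noise-free) output distribution

def run_reservoir(x, horizon=1):
    """Drive the reservoir over a scalar series x and fit the trainable readout."""
    feedback = np.zeros(WINDOW)
    states, targets = [], []
    for t in range(WINDOW - 1, len(x) - horizon):
        out = reservoir_step(x[t - WINDOW + 1 : t + 1], feedback)
        feedback = out[:WINDOW]                  # classical feedback for the next step
        states.append(out)
        targets.append(x[t + horizon])
    readout = Ridge(alpha=1e-6).fit(states, targets)  # only this readout is trained
    return readout, np.array(states)
```

Only the final regression is optimized; the quantum circuit itself stays fixed, which is what keeps training inexpensive compared with a fully trained recurrent model.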

Paper Structure

This paper contains 19 sections, 14 equations, 13 figures, and 2 algorithms.

Figures (13)

  • Figure 1: Schematic representation of the classical reservoir computing framework. The input vector $\mathbf{x}_t$ is projected into a high-dimensional dynamical space by a recurrent network of fixed, randomly connected internal nodes (the reservoir), characterized by weights $\mathbf{W}_{\mathrm{in}}$ and $\mathbf{W}_{\mathrm{res}}$. The resulting reservoir states are then linearly mapped to the target outputs $\mathbf{y}_j$ via a trainable readout layer with weights $\mathbf{W}_{\mathrm{out}}$. Only the output layer is optimized during training, while the reservoir dynamics remain untrained, enabling efficient learning of complex temporal patterns. (A minimal code sketch of this update rule follows the figure list.)
  • Figure 2: Schematic of the proposed feedback-driven quantum reservoir computing model. At each timestep $t$, a windowed input sequence $[x_{t-\tau}, \dots, x_t]$ is encoded into the left half of a quantum feature map circuit $U(\cdot)$, while the right half applies the inverted circuit $U^\dagger(\cdot)$ using a feedback vector derived from the previous reservoir output. The feedback is modulated by a scaling parameter $\alpha \in [0, 1]$, with $\alpha = 1$ corresponding to full feedback and $\alpha = 0$ to input-only evolution. The quantum circuit is initialized in $|0\rangle^{\otimes n}$, and measurement outcomes are collected to produce a classical output vector, which is passed to the regression model to generate the prediction $y_t$.
  • Figure 3: Model comparison on the Mackey--Glass dataset at $\tau = 17$, window size = 20, prediction horizon = 20. Quantum reservoir achieves the lowest MSE despite no hyperparameter tuning.
  • Figure 4: Predicted vs. true signal for the quantum reservoir over 100 test time steps. The reservoir captures the chaotic dynamics with high fidelity and stability.
  • Figure 5: Mean squared error (MSE) across Mackey--Glass delays $\tau = 15$ to $50$, for various prediction horizons, and two different feature maps. The CPMap consistently outperforms the ZZFeatureMap across delays and horizons.
  • ...and 8 more figures
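
For reference alongside Figure 1, the following is a minimal NumPy sketch of the classical echo state network baseline: fixed random weights $\mathbf{W}_{\mathrm{in}}$ and $\mathbf{W}_{\mathrm{res}}$, a $\tanh$ state update, and a ridge-regression readout $\mathbf{W}_{\mathrm{out}}$. Reservoir size, spectral radius, and input scaling are illustrative placeholders, not the hyperparameters of the paper's baselines.

```python
import numpy as np

rng = np.random.default_rng(0)
N_RES, SPECTRAL_RADIUS, INPUT_SCALE = 200, 0.9, 0.5   # illustrative hyperparameters

# Fixed random weights: only W_out is trained; W_in and W_res stay frozen.
W_in = INPUT_SCALE * rng.uniform(-1.0, 1.0, size=N_RES)
W_res = rng.uniform(-0.5, 0.5, size=(N_RES, N_RES))
W_res *= SPECTRAL_RADIUS / np.max(np.abs(np.linalg.eigvals(W_res)))

def esn_states(x):
    """Drive the reservoir with a scalar series and collect the state at each step."""
    r = np.zeros(N_RES)
    states = []
    for x_t in x:
        r = np.tanh(W_in * x_t + W_res @ r)   # r_t = tanh(W_in x_t + W_res r_{t-1})
        states.append(r.copy())
    return np.array(states)

def train_readout(states, targets, ridge=1e-6):
    """Closed-form ridge regression for the linear readout W_out."""
    return np.linalg.solve(states.T @ states + ridge * np.eye(states.shape[1]),
                           states.T @ targets)

# Usage: one-step-ahead prediction on a 1-D series x.
# states = esn_states(x[:-1]); W_out = train_readout(states, x[1:]); y_hat = states @ W_out
```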