
A versatile neural-network toolbox for testing Bell locality in networks

Antoine Girardin, Mohammad Massi Rashidi, Géraldine Haack, Nicolas Brunner, Alejandro Pozas-Kerstjens

Abstract

Determining whether an observed distribution of events generated in a quantum network is Bell local, i.e., if it admits an alternative realization in terms of independent local variables, is extremely challenging. Building upon arXiv:1907.10552, we develop a software solution that parameterizes local models in networks via neural networks. This allows one to leverage optimization tools available from the machine learning community in the search for network Bell nonlocality. Our solution applies to arbitrary networks, is easy to use, and includes technical improvements that significantly increase performance compared to previous implementations. We apply it to investigate nonlocality in several networks hitherto unexplored, providing insights on the corresponding quantum nonlocal sets and suggesting concrete, promising realizations of quantum nonlocal correlations.


Paper Structure

This paper contains 13 sections, 11 equations, and 7 figures.

Figures (7)

  • Figure 1: Scheme of the different networks we investigate in this work: (a) the triangle, (b) the square, and (c) the pentagon.
  • Figure 2: Illustration of the neural-network ansatzes. Panel (a) depicts the triangle network. Panel (b) shows a neural network that reproduces the topology of the triangle: the input layer has three neurons (one per source), followed by three feed-forward blocks that capture the response functions of the respective parties. The network topology is enforced by routing the information in each input neuron not to all parties, but only to those connected to the corresponding source. (A minimal code sketch of this ansatz follows the list.)
  • Figure 3: Results of the neural network for the RGB4 family of distributions. Panel (a) shows the distributions obtained by distributing $\ket{\psi_+}$ states in all sources and performing the RGB4 measurement in all parties of the triangle network, as a function of the parameter $u^2$ in the measurements. Panel (b) depicts the distances for the distributions with measurements fixed to $u^2 = 0.85$, and the sources distributing Werner states of visibility $V$. The light gray curve is a reproduction of Fig. 5 in Ref. [Krivachy_nn_2020]. The orange curves depict the final distances obtained with our software, while the blue curves show the best distances obtained during training (which may be affected by sampling errors). All results are obtained using a depth of 4 and a width per party of 60. Training runs for at most $10^4$ iterations, stopping early if there has been no improvement over $10^3$ iterations. (A sketch of this training protocol follows the list.)
  • Figure 4: Results of the neural network when scanning the family of measurements and the family of states defined in the main text, in (a) the triangle network, (b) the square network, (c) the pentagon network, and (d) the pentagon with three outputs. All results are obtained using a depth of 4 and a width per party of 60. Training runs for at most $10^4$ iterations, stopping early if there has been no improvement over $10^3$ iterations.
  • Figure A1: Expected error between a randomly drawn reference distribution and $10^3$ distributions obtained by sampling from it. Results in the left column are for the KL divergence, while those in the right column are for the Euclidean distance. The top row shows the dependence on the number of samples for distributions with 64 outcomes, and the bottom row shows the dependence on the number of outcomes, with the number of samples fixed to $10^5$. In the bottom-left figure, the error bars are hidden behind the data points. (A sketch of this sampling experiment follows the list.)
  • ...and 2 more figures
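The architecture described in the Figure 2 caption can be made concrete with a short sketch. Below is a minimal, hypothetical PyTorch rendering (the class name `TriangleAnsatz`, the uniform hidden variables, and the softmax response functions are our assumptions, not the toolbox's actual API): one hidden variable per source is routed only to the two parties adjacent to that source, and the joint distribution is the average, over the hidden variables, of the product of the parties' response functions.

```python
import torch
import torch.nn as nn


class TriangleAnsatz(nn.Module):
    """Sketch of the Fig. 2 ansatz for the triangle network: three
    independent hidden variables (one per source) and one feed-forward
    block per party, each seeing only its two adjacent sources."""

    def __init__(self, n_outcomes=4, width=60, depth=4):
        super().__init__()

        def party_block():
            # Each party receives two hidden variables as input.
            layers = [nn.Linear(2, width), nn.ReLU()]
            for _ in range(depth - 1):
                layers += [nn.Linear(width, width), nn.ReLU()]
            layers.append(nn.Linear(width, n_outcomes))
            return nn.Sequential(*layers)

        self.alice = party_block()
        self.bob = party_block()
        self.charlie = party_block()

    def forward(self, n_samples=10_000):
        # One uniform hidden variable per source: alpha, beta, gamma.
        alpha, beta, gamma = torch.rand(3, n_samples, 1)
        # Topology of the triangle: each party sees only the two
        # sources connected to it, not all three.
        p_a = torch.softmax(self.alice(torch.cat([beta, gamma], dim=1)), dim=1)
        p_b = torch.softmax(self.bob(torch.cat([gamma, alpha], dim=1)), dim=1)
        p_c = torch.softmax(self.charlie(torch.cat([alpha, beta], dim=1)), dim=1)
        # Local model: p(abc) = E_lambda[p_A(a) p_B(b) p_C(c)],
        # estimated by averaging over the sampled hidden variables.
        return torch.einsum('ni,nj,nk->ijk', p_a, p_b, p_c) / n_samples
```

The defaults (width 60, depth 4) match the hyperparameters quoted in the captions of Figs. 3 and 4.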
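The training protocol quoted in those captions (at most $10^4$ iterations, early stopping after $10^3$ iterations without improvement) can likewise be sketched. This is an illustration under stated assumptions, not the toolbox's actual training loop: the choice of Adam, of the KL divergence $D(\text{target}\,\|\,\text{model})$ as the loss, and the learning rate are all ours.

```python
import torch


def train(model, target, max_iters=10_000, patience=1_000, lr=1e-3):
    """Sketch of the training protocol of Figs. 3-4: minimize the KL
    divergence to the target distribution, stopping early when there
    has been no improvement over `patience` iterations."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    best, since_best = float('inf'), 0
    for _ in range(max_iters):
        joint = model().clamp_min(1e-12)  # model distribution p(abc)
        # KL(target || model); xlogy returns 0 where target is 0.
        loss = torch.xlogy(target, target / joint).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
        if loss.item() < best:
            best, since_best = loss.item(), 0
        else:
            since_best += 1
        if since_best >= patience:  # early stopping
            break
    return best
```

Because the hidden variables are resampled at every iteration, the loss is a stochastic estimate; this is consistent with the caption's remark that the best distance seen during training may be affected by sampling errors.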
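Finally, the sampling-error experiment of Fig. A1 can be reproduced in a few lines of NumPy. The sketch below draws multinomial samples from a random reference distribution and averages the KL divergence and Euclidean distance over trials; the direction of the KL divergence and the regularization of empirical zeros are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)


def sampling_errors(p, n_samples=10**5, n_trials=10**3):
    """Sketch of the Fig. A1 experiment: expected distance between a
    reference distribution p and its finite-sample estimates."""
    kl, euc = [], []
    for _ in range(n_trials):
        counts = rng.multinomial(n_samples, p)
        q = counts / n_samples  # empirical distribution
        mask = p > 0
        # KL(p || q), with empirical zeros regularized to avoid log(0).
        kl.append(np.sum(p[mask] * np.log(p[mask] / np.maximum(q[mask], 1e-12))))
        euc.append(np.linalg.norm(p - q))
    return np.mean(kl), np.mean(euc)


# Example: a random reference distribution with 64 outcomes, as in Fig. A1.
p = rng.dirichlet(np.ones(64))
print(sampling_errors(p))
```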