
Neural Network Parametrization of Deep-Inelastic Structure Functions

Stefano Forte, Lluis Garrido, Jose I. Latorre, Andrea Piccione

TL;DR

This work addresses the bias and error-propagation challenges in determining deep-inelastic structure functions from DIS data by constructing a bias-free probability density over F_2(x,Q^2) using Monte Carlo replicas and an ensemble of neural networks. The authors generate replicas of NMC and BCDMS data, train thousands of neural nets on each replica, and infer the full distribution of F_2, including point-to-point correlations and derived observables such as the nonsinglet F_2^{NS}. They demonstrate that the neural ensemble reproduces central values, reduces uncertainties where data are informative, and preserves experimental systematics, while providing a practical, bias-free interpolation across the data region. The results are made publicly available and are poised to enhance precision QCD studies, including unbiased determinations of moments and alpha_s via truncated-moment techniques.

Abstract

We construct a parametrization of deep-inelastic structure functions which retains information on experimental errors and correlations, and which does not introduce any theoretical bias while interpolating between existing data points. We generate a Monte Carlo sample of pseudo-data configurations and we train an ensemble of neural networks on them. This effectively provides us with a probability measure in the space of structure functions, within the whole kinematic region where data are available. This measure can then be used to determine the value of the structure function, its error, point-to-point correlations and generally the value and uncertainty of any function of the structure function itself. We apply this technique to the determination of the structure function F_2 of the proton and deuteron, and a precision determination of the isotriplet combination F_2[p-d]. We discuss in detail these results, check their stability and accuracy, and make them available in various formats for applications.
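The procedure the abstract outlines — fluctuate the data into Monte Carlo replicas, fit a neural network to each replica, and read off central values, errors, and correlations from the ensemble — can be sketched in miniature. This is a toy illustration, not the authors' code: the data below are invented, the errors are taken as uncorrelated Gaussian (the real analysis propagates correlated systematics and normalizations), and the one-hidden-layer network stands in for the architectures used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy "data": central values and uncorrelated errors of a
# structure-function-like observable at a few x points (illustrative only).
x = np.linspace(0.05, 0.75, 15)
f_true = 5 * x**0.5 * (1 - x)**3
sigma = 0.05 * f_true + 0.01
f_exp = f_true + rng.normal(0, sigma)

def make_replica(rng):
    # Monte Carlo replica: shift each point by a Gaussian of width = its error
    return f_exp + rng.normal(0, sigma)

def train_net(xs, ys, n_hidden=10, lr=0.05, n_epochs=2000, rng=None):
    # Tiny one-hidden-layer tanh network, full-batch gradient descent on MSE
    W1 = rng.normal(0, 1, (n_hidden, 1))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 1, n_hidden) / np.sqrt(n_hidden)
    b2 = 0.0
    X = xs[:, None]
    for _ in range(n_epochs):
        h = np.tanh(X @ W1.T + b1)          # (N, n_hidden) hidden activations
        pred = h @ W2 + b2                  # (N,) network output
        g = 2 * (pred - ys) / len(ys)       # d(MSE)/d(pred)
        gW2, gb2 = h.T @ g, g.sum()
        gh = np.outer(g, W2) * (1 - h**2)   # backprop through tanh
        gW1, gb1 = gh.T @ X, gh.sum(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    return lambda xq: np.tanh(xq[:, None] @ W1.T + b1) @ W2 + b2

# One trained network per replica
n_rep = 25
fits = [train_net(x, make_replica(rng), rng=rng) for _ in range(n_rep)]

# The ensemble of fits defines the probability measure over the function:
preds = np.array([f(x) for f in fits])  # (n_rep, n_points)
central = preds.mean(axis=0)            # best estimate of F at each x
err = preds.std(axis=0)                 # uncertainty from ensemble spread
corr = np.corrcoef(preds.T)             # point-to-point correlation matrix
```

The same ensemble can be evaluated at any x inside the data region, and any functional of the structure function (a moment, a difference of fits) inherits its error and correlations by being computed replica-by-replica.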

Paper Structure

This paper contains 21 sections, 45 equations, 22 figures, and 6 tables.

Figures (22)

  • Figure 1: NMC and BCDMS kinematic range.
  • Figure 2: A three-layer feed-forward neural network consisting of input, hidden and output layers.
  • Figure 3: Flow chart for the construction of the parametrization of structure functions
  • Figure 4: $\langle F^{(art)}_i\rangle_{rep}$ vs. $\bar{F}^{(exp)}_i$, $\langle \sigma^{(art)}_i\rangle_{rep}$ vs. $\bar{\sigma}^{(exp)}_i$ and $\langle\rho_{ij}^{(art)}\rangle_{rep}$ vs. $\rho_{ij}^{(exp)}$ with $N_{rep}=$ 10 (red), 100 (green) and 1000 (blue) replicas.
  • Figure 5: Fit of the nonsinglet structure function $F_2^p-F_2^d$ to a subset of BCDMS data points for increasing training lengths: insufficient training (left); normal training (middle); overlearning (right). The variable in abscissa is an arbitrary point number.
  • ...and 17 more figures