
Unified Uncertainties: Combining Input, Data and Model Uncertainty into a Single Formulation

Matias Valdenegro-Toro, Ivo Pascal de Jong, Marco Zullich

TL;DR

This work tackles the quantification of uncertainty arising from the inputs, in addition to data and model uncertainties, in neural networks. It introduces a unified framework that propagates input uncertainty ($IU$) through the network while jointly estimating data uncertainty ($AU$) and model uncertainty ($EU$). The approach offers both Taylor-based propagation and Monte Carlo sampling to transform $IU$ into output $EU$, and demonstrates that $IU$ propagation yields more stable decision boundaries under input noise and a clearer separation of uncertainty sources. The results on the Two Moons dataset highlight the practical potential for sensor-rich applications where input variance is known, and point to future work on real-world data and broader noise models.

Abstract

Modelling uncertainty in Machine Learning models is essential for achieving safe and reliable predictions. Most research on uncertainty focuses on output uncertainty (predictions), but minimal attention is paid to uncertainty at inputs. We propose a method for propagating uncertainty in the inputs through a Neural Network that is simultaneously able to estimate input, data, and model uncertainty. Our results show that this propagation of input uncertainty yields a more stable decision boundary than comparatively simple Monte Carlo sampling, even under large amounts of input noise. Additionally, we discuss and demonstrate that input uncertainty, when propagated through the model, results in model uncertainty at the outputs. The explicit incorporation of input uncertainty may be beneficial in situations where the amount of input uncertainty is known, though good datasets for this are still needed.
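The Monte Carlo route from the abstract can be sketched in a few lines: draw noisy copies of the input from an assumed Gaussian $\mathcal{N}(x, \sigma^2 I)$, run each through the model, and read off the output variance induced by input noise alone. The `model` below is a hypothetical toy stand-in for a trained network, not the paper's architecture, and the known input variance is an assumption of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Hypothetical stand-in for a trained network: a fixed nonlinear map R^2 -> R.
    return np.tanh(1.5 * x[..., 0] - 0.5 * x[..., 1])

def mc_input_uncertainty(x, sigma, n_samples=1000):
    """Propagate Gaussian input noise N(x, sigma^2 I) by Monte Carlo sampling.

    Returns the predictive mean and the output variance attributable to
    input uncertainty alone (no data or model uncertainty in this sketch).
    """
    noise = rng.normal(0.0, sigma, size=(n_samples,) + x.shape)
    outputs = model(x + noise)  # one forward pass per noisy input sample
    return outputs.mean(axis=0), outputs.var(axis=0)

x = np.array([0.3, -0.2])
mean, var = mc_input_uncertainty(x, sigma=0.1)
```

As the figure captions later note, raising `sigma` inflates the sampled output variance, which is what makes the sampled decision boundary noisy in practice.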

Paper Structure

This paper contains 10 sections, 13 equations, 5 figures.

Figures (5)

  • Figure 1: Concept of Input Uncertainty in the Two Moons dataset; increasing $\sigma$ makes classification more difficult and the decision boundary unclear. Considering IU could improve model performance.
  • Figure 2: Results on the Two Moons dataset with Monte Carlo Sampling IU (using Eq. \ref{eq:iu_mc}). As the IU increases, the learned decision boundary loses shape and becomes noisy. Note that each value of $\sigma$ produces three plot columns, one per output uncertainty.
  • Figure 3: Results on the Two Moons dataset with Propagation IU (using Eq. \ref{eq:iu_propagation}). As the IU increases, the predicted EU increases, while the other predicted uncertainties remain roughly the same. Note that each value of $\sigma$ produces three plot columns, one per output uncertainty.
  • Figure 4: Results on the Toy Regression example with Monte Carlo Sampling IU. As the IU increases, predicted IU increases accordingly, without affecting EU. Note that each value of $\sigma$ produces three plot columns for each output uncertainty type (Aleatoric, Epistemic, Input).
  • Figure 5: Results on the Toy Regression example with Propagation IU. As the IU increases, mostly predicted EU increases, while predicted IU increases only slightly. Note that each value of $\sigma$ produces three plot columns for each output uncertainty type (Aleatoric, Epistemic, Input).
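The propagation variant behind Figures 3 and 5 rests on a first-order Taylor expansion (the delta method): for small Gaussian input noise with covariance $\sigma^2 I$, the induced output variance is approximately $J(x)\,\sigma^2 I\,J(x)^\top$, where $J$ is the model's Jacobian at $x$. A minimal sketch follows, with a hypothetical toy `model` in place of the trained network and the Jacobian taken by finite differences rather than the paper's exact formulation.

```python
import numpy as np

def model(x):
    # Hypothetical stand-in for a trained network: a fixed nonlinear map R^2 -> R.
    return np.tanh(1.5 * x[0] - 0.5 * x[1])

def taylor_input_uncertainty(x, sigma, eps=1e-5):
    """First-order (delta-method) propagation of input variance sigma^2 I.

    Var[f(x + e)] ~= J(x) (sigma^2 I) J(x)^T for small noise e, where J is
    the Jacobian of the model at x (estimated here by central differences).
    """
    jac = np.zeros_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = eps
        jac[i] = (model(x + step) - model(x - step)) / (2 * eps)
    # Scalar output, so J Sigma J^T collapses to sigma^2 * ||J||^2.
    return float(sigma**2 * (jac @ jac))

x = np.array([0.3, -0.2])
var = taylor_input_uncertainty(x, sigma=0.1)
```

Unlike sampling, this propagation is deterministic, which is consistent with the smoother decision boundaries reported for the propagation results.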