Concerning Uncertainty -- A Systematic Survey of Uncertainty-Aware XAI

Helena Löfström, Tuwe Löfström, Anders Hjort, Fatima Rabia Yapicioglu

Abstract

This paper surveys uncertainty-aware explainable artificial intelligence (UAXAI), examining how uncertainty is incorporated into explanatory pipelines and how such methods are evaluated. Across the literature, three recurring approaches to uncertainty quantification emerge (Bayesian, Monte Carlo, and conformal methods), alongside distinct strategies for integrating uncertainty into explanations: assessing trustworthiness, constraining models or explanations, and explicitly communicating uncertainty. Evaluation practices remain fragmented and largely model-centered, with limited attention to users and inconsistent reporting of reliability properties (e.g., calibration, coverage, explanation stability). Recent work leans towards calibration and distribution-free techniques, and recognizes explainer variability as a central concern. We argue that progress in UAXAI requires unified evaluation principles that link uncertainty propagation, robustness, and human decision-making, and we highlight counterfactual and calibration approaches as promising avenues for aligning interpretability with reliability.

Paper Structure

This paper contains 19 sections, 3 figures, and 3 tables.

Figures (3)

  • Figure 1: Selection of papers in the study.
  • Figure 2: Illustration of sources of uncertainty across the technical stages of the decision pipeline, from data collection to explanation methods. The figure covers technical sources only; user-related uncertainties are not included.
  • Figure 3: Evolution of uncertainty types and handling methods in the uncertainty-aware XAI literature. The years 2021 and 2022 were merged (<=2022) due to limited publications. Some papers include several uncertainty handling methods, which is why the totals in the lower plot exceed 46.