
Visual Analysis of Prediction Uncertainty in Neural Networks for Deep Image Synthesis

Soumya Dutta, Faheem Nizar, Ahmad Amaan, Ayan Acharya

TL;DR

This contribution demonstrates how the prediction uncertainty and sensitivity of DNNs can be estimated efficiently using various methods and then interactively compared and contrasted for deep image synthesis tasks. The analysis suggests that uncertainty-aware deep visualization models generate illustrations of superior quality, informativeness, and diversity.

Abstract

Ubiquitous applications of deep neural networks (DNNs) in different artificial intelligence systems have led to their adoption in solving challenging visualization problems in recent years. While sophisticated DNNs offer impressive generalization, it is imperative to comprehend the quality, confidence, robustness, and uncertainty associated with their predictions. A thorough understanding of these quantities produces actionable insights that help application scientists make informed decisions. Unfortunately, the intrinsic design of DNNs does not yield prediction uncertainty, necessitating separate formulations of robust, uncertainty-aware models for diverse visualization applications. To that end, this contribution demonstrates how the prediction uncertainty and sensitivity of DNNs can be estimated efficiently using various methods and then interactively compared and contrasted for deep image synthesis tasks. Our inspection suggests that uncertainty-aware deep visualization models generate illustrations of superior quality, informativeness, and diversity. Furthermore, prediction uncertainty improves the robustness and interpretability of deep visualization models, making them practical and convenient for various scientific domains that thrive on visual analyses.

Paper Structure

This paper contains 27 sections, 2 equations, 14 figures, and 3 tables.

Figures (14)

  • Figure 1: The DNN architecture used for demonstration, alongside the synthetic training and test data used to illustrate the MC-Dropout-based uncertainty estimation technique. The synthetic data are generated from the function $f(x) = x\sin(x)$ with added Gaussian noise ($\epsilon \sim \mathcal{N}(0, 0.1)$); a minimal code sketch of this setup follows the figure list.
  • Figure 2: Estimated uncertainty for the Ensemble and MC-Dropout methods. In each plot, the blue dotted line shows the mean prediction, and the light red envelope shows the prediction uncertainty.
  • Figure 3: Architecture of our deep visualization model, with the inset on the right showing the structure of a residual block (a hedged sketch of such a block appears after this list).
  • Figure 4: Uncertainty, error, and error standard deviation estimates for both the MC-Dropout and Ensemble methods. Channel-wise (RGB) uncertainty and error quantities are computed for both methods.
  • Figure 5: Interactive uncertainty analysis interface showing results for the Combustion data. The interface lets users effectively compare and contrast the uncertainty and error estimates produced by the MC-Dropout and Ensemble methods, where (A) shows the PCP, (B) shows the uncertainty and error heatmaps, and (C) shows the image view panel.
  • ...and 9 more figures
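
To make the uncertainty-estimation setup of Figures 1 and 2 concrete, below is a minimal sketch (not the authors' code) of MC-Dropout regression on the synthetic $f(x) = x\sin(x)$ data. Only the data-generating function and the noise distribution come from the caption of Figure 1; the network size, dropout rate, training schedule, and number of stochastic passes are illustrative assumptions.

```python
# Minimal MC-Dropout sketch for the synthetic task in Figure 1.
# Assumptions: network width, dropout rate, optimizer settings, and the
# number of stochastic passes are illustrative, not the paper's values.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: y = x*sin(x) + eps, eps ~ N(0, 0.1)
# (0.1 is read here as the noise standard deviation).
x = torch.linspace(0.0, 10.0, 200).unsqueeze(1)
y = x * torch.sin(x) + 0.1 * torch.randn_like(x)

# Small MLP with dropout between hidden layers.
model = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 1),
)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(2000):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

# MC-Dropout: keep dropout active at inference time (model.train())
# and aggregate T stochastic forward passes.
model.train()
with torch.no_grad():
    preds = torch.stack([model(x) for _ in range(100)])  # shape (T, N, 1)

mean_prediction = preds.mean(dim=0)  # the blue dotted line in Figure 2
uncertainty = preds.std(dim=0)       # the light red envelope in Figure 2
```

A Deep-Ensemble estimate (the other method compared in Figure 2) is aggregated the same way, except the T dropout passes are replaced by the predictions of M independently initialized and trained networks.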
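Figure 3's inset shows the exact residual block the authors use; the fragment below is only a generic, assumption-laden sketch of such a block, included to illustrate the skip connection. The channel count, kernel size, activation, and absence of normalization are all assumptions.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Generic residual block: output = x + F(x), where F is a small
    convolutional sub-network. The paper's exact block layout is shown
    in the inset of Figure 3; the layers here are placeholders."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The skip connection keeps gradients flowing through deep stacks.
        return x + self.body(x)
```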