
Hyper-differential sensitivity analysis with respect to model discrepancy: Sequential optimal experimental design

Madhusudan Madhavan, Joseph Hart, Bart van Bloemen Waanders

Abstract

Large-scale optimization problems are ubiquitous in the physical sciences; yet, high-fidelity models can often be complex and computationally prohibitive for optimization. A practical alternative is to use a low-fidelity model to facilitate optimization. However, the discrepancy between the high- and low-fidelity models can lead to suboptimal solutions. To address this, we build on recent work in Hyper-Differential Sensitivity Analysis to leverage limited high-fidelity simulations to update the optimization solution. Our contributions in this article include: (i) incorporating pseudo-time continuation techniques to efficiently compute higher-accuracy optimal solution updates, and (ii) proposing a Bayesian framework for sequential data acquisition that strategically guides high-fidelity evaluations and reduces uncertainty in the model discrepancy estimation. Numerical results demonstrate that our framework delivers significant improvements to optimization solutions with only a few high-fidelity evaluations.

Paper Structure

This paper contains 18 sections, 1 theorem, 53 equations, 4 figures, 1 table.

Key Result

Theorem 4.2

Let $k$ and $p$ be positive integers. The data acquisition criterion from Definition 4.1 satisfies an identity in which $\mu_{\mathrm{pr}} = \mathcal{N}\!\left( \boldsymbol{0}, \boldsymbol{W}_{\boldsymbol{\theta}}^{-1} \right)$ denotes the prior distribution of $\boldsymbol{\theta}$ and $\bar{\boldsymbol{\theta}}_{k+p}$ denotes the posterior mean given data inputs $\{\boldsymbol{z}_1, \ldots\}$ …
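Theorem 4.2 is stated for a Gaussian prior $\mathcal{N}(\boldsymbol{0}, \boldsymbol{W}_{\boldsymbol{\theta}}^{-1})$ on the discrepancy parameters and a posterior mean $\bar{\boldsymbol{\theta}}_{k+p}$ conditioned on $k+p$ data inputs. A key feature of such linear-Gaussian models is that the posterior covariance does not depend on the observed values, so a design criterion can be evaluated before any high-fidelity run. The sketch below illustrates this with a greedy A-optimal (trace-of-covariance) selection rule; the dimensions, noise level, candidate set, and the A-optimality criterion are all illustrative assumptions, not the paper's actual criterion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions and noise level (assumptions, not from the paper).
n_theta, n_obs, sigma = 4, 3, 0.1
W = np.eye(n_theta)  # prior precision W_theta, so theta ~ N(0, W^{-1})

def posterior(Phis, ys):
    """Gaussian posterior (mean, cov) of theta for y_i = Phi_i @ theta + noise."""
    H = W.copy()
    b = np.zeros(n_theta)
    for Phi, y in zip(Phis, ys):
        H += Phi.T @ Phi / sigma**2   # accumulate data precision
        b += Phi.T @ y / sigma**2
    cov = np.linalg.inv(H)
    return cov @ b, cov

# Greedy sequential design: at each step pick the candidate whose observation
# would most shrink tr(posterior covariance) (A-optimality). The covariance
# ignores the y-values, so the criterion is evaluated with placeholder zeros
# *before* generating any data.
candidates = [rng.standard_normal((n_obs, n_theta)) for _ in range(5)]
Phis, ys, traces = [], [], []
theta_true = rng.standard_normal(n_theta)  # synthetic ground truth
for step in range(3):
    best = min(range(len(candidates)),
               key=lambda i: np.trace(posterior(Phis + [candidates[i]],
                                                ys + [np.zeros(n_obs)])[1]))
    Phi = candidates.pop(best)
    y = Phi @ theta_true + sigma * rng.standard_normal(n_obs)  # synthetic data
    Phis.append(Phi)
    ys.append(y)
    mean, cov = posterior(Phis, ys)
    traces.append(np.trace(cov))
```

Each selected observation adds a positive-semidefinite term to the posterior precision, so the trace of the posterior covariance decreases monotonically, mirroring the uncertainty reduction that the paper's criterion targets.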

Figures (4)

  • Figure 1: Optimization objective over continuation steps (left) and number of data points (right).
  • Figure 2: Comparison of objective (left) and OED criteria (right) with random data points.
  • Figure 3: Performance of batch sequential OED with $p = 1, 2, 3$, and $6$.
  • Figure 4: High-fidelity objective over the number of high-fidelity model evaluations (left), and the optimization solutions (right).

Theorems & Definitions (3)

  • Definition 4.1
  • Theorem 4.2
  • Proof 1