
Risk-averse optimization under distributional uncertainty with Rockafellian relaxation

Harbir Antil, Alonso J. Bustos, Sean P. Carney, Benjamín Venegas

Abstract

A framework for risk-averse optimization problems is introduced that is resilient to ambiguities in the true form of the underlying probability distribution. The focus is on problems with partial differential equations (PDEs) as constraints, although the formulation is more broadly applicable. The framework is based on combining risk measures with problem relaxation techniques, and it builds on previous advances for risk-neutral problems. This work advances the existing theory with strengthened $\Gamma$-convergence results, novel existence results, and first-order optimality criteria. In particular, the theoretical approach naturally accommodates infinite-dimensional probability spaces; no finite-dimensional noise assumption is needed. The framework blends aspects of both distributionally robust optimization (DRO) and distributionally optimistic optimization (DOO) approaches. The DRO aspect facilitates strong out-of-sample performance, while the DOO aspect guards against adversarial and outlier data, as illustrated with numerical examples.
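As general background for the relaxation technique named in the abstract (a generic sketch of the standard construction, not this paper's specific formulation), a Rockafellian embeds a problem $\min_x f(x)$ into a family of perturbed problems and then minimizes jointly over decisions and perturbations:

```latex
% Generic Rockafellian relaxation (standard background; the paper's
% particular choice of F and penalty term may differ):
% F(x,u) embeds min_x f(x) via the anchor condition F(x,0) = f(x);
% the relaxed problem minimizes jointly over decisions x and
% perturbations u, with a penalty parameter theta > 0.
\[
  F : X \times U \to \overline{\mathbb{R}},
  \qquad
  F(x,0) = f(x) \quad \text{for all } x \in X,
\]
\[
  \min_{x \in X,\; u \in U} \; F(x,u) + \theta \, \|u\| ,
  \qquad \theta > 0.
\]
% As theta grows, perturbations are penalized away and the relaxed
% problem tightens toward the original (unperturbed) one.
```

The joint minimization over $(x,u)$ is what lets the relaxed problem absorb corrupted or outlier data into the perturbation variable $u$ rather than distorting the decision $x$.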

Paper Structure

This paper contains 19 sections, 16 theorems, 122 equations, 8 figures, and 4 tables.

Key Result

Proposition 2.1

Let $(x_\varepsilon^\star,y_\varepsilon^\star)_\varepsilon$ be a sequence in $X\times Y$ with $x_\varepsilon^\star\in \operatorname{argmin}\phi_\varepsilon(x)$. Assume that the sequence $(\phi_\varepsilon)_{\varepsilon}$ converges to $\phi$ in one of the notions defined above, and that $(x_\varepsilon^\star)_\varepsilon$ …
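The statement above is truncated in this extract; for context, the standard consequence of $\Gamma$-convergence that results of this type build on can be sketched as follows (a generic sketch, not the paper's exact statement):

```latex
% Fundamental property of Gamma-convergence (generic sketch):
% if phi_eps Gamma-converges to phi, x_eps* minimizes phi_eps,
% and x_eps* converges to x*, then x* minimizes phi and the
% optimal values converge as well.
\[
  \phi_\varepsilon \xrightarrow{\;\Gamma\;} \phi,
  \qquad
  x_\varepsilon^\star \in \operatorname*{argmin}_{x \in X} \phi_\varepsilon(x),
  \qquad
  x_\varepsilon^\star \to x^\star
\]
\[
  \Longrightarrow \quad
  x^\star \in \operatorname*{argmin}_{x \in X} \phi(x)
  \quad \text{and} \quad
  \phi_\varepsilon(x_\varepsilon^\star) \to \phi(x^\star).
\]
```

The Mosco and weak-strong $\Gamma$-convergence notions listed under Definitions 2.1 and 2.2 refine which topologies the convergence $x_\varepsilon^\star \to x^\star$ is taken in.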

Figures (8)

  • Figure 1: Example 1: Optimal controls for the true, uncorrupted problem, as well as for the corrupted problem (dotted lines) and the corresponding Rockafellian relaxations at varying corruption levels.
  • Figure 2: Probability density functions for the random variable $\xi$ in the advection field $v(\xi)$ for the PDE constraint (eq:2d_bvp_constraint).
  • Figure 3: Optimal controls $z^{\ast}$ for (eq:2d_optimal_control_problem) (without any Rockafellian relaxation) for differing values of risk-tolerance $\beta$ and corruption levels. All plots use the same legend.
  • Figure 4: For $\beta=0.1$: pointwise errors between an uncorrupted optimal control and corrupted optimal controls at 50% and 100% corruption (left and middle, respectively), as well as the pointwise error for a Rockafellian optimal control at 100% corruption (right). All plots use the same legend.
  • Figure 5: For $\beta=0.9$: pointwise errors between an uncorrupted optimal control and corrupted optimal controls at 50% and 100% corruption (left and middle, respectively), as well as the pointwise error for a Rockafellian optimal control at 100% corruption (right). All plots use the same legend.
  • ...and 3 more figures

Theorems & Definitions (38)

  • Definition 2.1: Mosco convergence
  • Definition 2.2: Weak-strong $\Gamma$-convergence
  • Proposition 2.1
  • Proof
  • Definition 2.3: Rockafellian
  • Proposition 3.1
  • Proof
  • Proposition 3.2
  • Proof
  • Lemma 3.1
  • ...and 28 more