
Globalized Adversarial Regret Optimization: Robust Decisions with Uncalibrated Predictions

Jannis Kurtz, Bart P. G. van Parys

Abstract

Optimization problems routinely depend on uncertain parameters that must be predicted before a decision is made. Classical robust and regret formulations are designed to handle erroneous predictions and can provide statistical error bounds in simple settings. However, when predictions lack rigorous error bounds (as is typical of modern machine learning methods), classical robust models often yield vacuous guarantees, while regret formulations can paradoxically produce decisions that are more optimistic than even a nominal solution. We introduce Globalized Adversarial Regret Optimization (GARO), a decision framework that controls adversarial regret, defined as the gap between the worst-case cost and the oracle robust cost, uniformly across all possible uncertainty set sizes. By design, GARO delivers absolute or relative performance guarantees against an oracle with full knowledge of the prediction error, without requiring any probabilistic calibration of the uncertainty set. We show that GARO equipped with a relative rate function generalizes the classical adaptation method of Lepski to downstream decision problems. We derive exact tractable reformulations for problems with affine worst-case cost functions and polyhedral norm uncertainty sets, and provide a discretization scheme and a constraint-generation algorithm with convergence guarantees for general settings. Finally, experiments demonstrate that GARO yields solutions with a more favorable trade-off between worst-case and mean out-of-sample performance, as well as stronger global performance guarantees.
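The abstract's central objects can be made concrete with a minimal numerical sketch. Assume a toy newsvendor-style cost $f(x,p)=\max\{a(p-x),\,b(x-p)\}$ with a point prediction $p_0$ and interval uncertainty sets $\{p : |p-p_0|\le\gamma\}$; this toy cost, all constants, and the grid discretization below are illustrative choices, not the paper's formulation. The sketch computes the adversarial regret $A(x,\gamma)$ (worst-case cost minus the oracle robust cost $R(\gamma)$) on a grid of radii and picks the decision minimizing the largest regret over all radii, in the spirit of the discretization approach the abstract mentions.

```python
import numpy as np

# Toy newsvendor-style cost: f(x, p) = max(a*(p - x), b*(x - p)),
# with prediction p0 and interval uncertainty set {p : |p - p0| <= gamma}.
# All names and constants here are illustrative, not from the paper.
a, b, p0 = 3.0, 1.0, 10.0
gammas = np.linspace(0.0, 2.0, 201)      # candidate uncertainty radii
xs = p0 + np.linspace(-2.0, 2.0, 401)    # candidate decisions

d = (xs - p0)[:, None]                   # decision offsets, shape (nx, 1)
g = gammas[None, :]                      # radii, shape (1, ng)

# Worst-case cost over the interval: attained at an endpoint of [p0-g, p0+g].
worst = np.maximum(a * (g - d), b * (g + d))

# Oracle robust cost R(gamma) = min over x of the worst-case cost;
# taken as the grid minimum to stay generic.
R = worst.min(axis=0)

# Adversarial regret A(x, gamma), and the decision minimizing its
# maximum over all radii in the grid (a discretized GARO-style choice).
A = worst - R[None, :]
garo_idx = A.max(axis=1).argmin()
x_garo, garo_regret = xs[garo_idx], A.max(axis=1)[garo_idx]

# Nominal decision x = p0 for comparison.
nom_regret = A[np.abs(xs - p0).argmin()].max()
print(x_garo, garo_regret, nom_regret)
```

In this toy instance the nominal decision's regret grows linearly in $\gamma$, while the GARO-style decision hedges across radii and keeps the regret uniformly small, mirroring the qualitative behavior the abstract describes.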


Paper Structure

This paper contains 34 sections, 13 theorems, 128 equations, 15 figures, 1 algorithm.

Key Result

Theorem 1

Let $P_\infty$ be a compact convex set admitting a continuous strictly convex function $\psi : P_\infty \to \mathbb{R}_+$. Let $f(x,p)$ be jointly continuous, convex in $x$, and concave in $p$ for every $x \in X$, with $X$ compact convex. Let $p \mapsto d(p_0, p)$ be lower semicontinuous and convex.

Figures (15)

  • Figure 1: Facility location instance: customer locations $\Xi$ (red marks) with predicted distribution $\mathbb P_0$ (red arrows). The path of robust hub locations $\{\mu_{rob}(\gamma)\}_{\gamma\geq 0}$ (gray) moves from the nominal location $\mu_{nom}$ (orange) to $\mathop{\mathrm{ctr}}\nolimits(\Xi)$ (black) as $\gamma_0$ grows. The set $M(\gamma_0)$ of attainable means under $\mathcal{P}_{\gamma_0}$ (blue shaded polytope) and the regret-optimal location $\mu_{reg}=\mathop{\mathrm{ctr}}\nolimits(M(\gamma_0))$ (blue dot) are also shown, together with the satisficing location $\mu_{sat}$ (yellow). The GARO solution $\mu_{garo}$ (green star) is discussed in Section \ref{sec:robust_decisions_wild_predictions}.
  • Figure 2: Adversarial regret $A(\cdot,\gamma)$ as a function of the Wasserstein perturbation level $\gamma$ for the five hub locations shown in Figure \ref{fig:weber-instance}. The regret-optimal location $\mu_{reg}$ (blue) is so optimistic that its adversarial regret exceeds that of the nominal location $\mu_{nom}$ (orange) for all $\gamma\geq 0$, confirming \ref{eq:optimistic}. The robust location $\mu_{rob}$ (black) achieves zero adversarial regret at $\gamma_0$. The GARO location $\mu_{garo}$ (green) maintains uniformly small adversarial regret across all $\gamma$, bounded by the guarantee $\alpha_{garo}$ (dotted green line).
  • Figure 3: Out-of-sample performance of RO for the minimum knapsack problem with $n=50$ for Gaussian data. Mean vs. worst-case (left) and mean vs. $90\%$-quantile (right).
  • Figure 4: Performance guarantees of RO$(\theta)$ in the minimum knapsack problem with $n=50$ for Gaussian data. Its guarantee is all-or-nothing: it is constant for $d(p_0, p) \leq \theta\gamma_{0.99}$ and is vacuous beyond this threshold.
  • Figure 5: Boxplots of the out-of-sample objective values for the minimum knapsack problem with $n=50$ for Gaussian data. The diamonds denote the mean value.
  • ...and 10 more figures

Theorems & Definitions (36)

  • Example 1
  • Example 2
  • Example 3
  • Theorem 1
  • Example 4
  • Theorem 2
  • Proof
  • Example 5
  • Lemma 1
  • Proof
  • ...and 26 more