Softmax gradient policy for variance minimization and risk-averse multi-armed bandits

Gabriel Turinici

Abstract

Algorithms for the Multi-Armed Bandit (MAB) problem play a central role in sequential decision-making and have been extensively explored both theoretically and numerically. While most classical approaches aim to identify the arm with the highest expected reward, we focus on a risk-aware setting where the goal is to select the arm with the lowest variance, favoring stability over potentially high but uncertain returns. To model the decision process, we consider a softmax parameterization of the policy; we propose a new algorithm to select the minimal-variance (or minimal-risk) arm and prove its convergence under natural conditions. The algorithm constructs an unbiased estimate of the objective by using two independent draws from the current arm's distribution. We provide numerical experiments that illustrate the practical behavior of these algorithms and offer guidance on implementation choices. The setting also covers general risk-aware problems where there is a trade-off between maximizing the average reward and minimizing its variance.
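
A minimal sketch may help make the abstract's construction concrete; the following is an illustration, not the paper's exact Algorithm 1. It combines a softmax policy over preferences $H$, an unbiased per-step variance estimate from two independent draws of the selected arm (using $\mathbb{E}[(X-X')^2]=2\,\mathrm{Var}(X)$ for i.i.d. $X,X'$), and a score-function gradient descent step. The Gaussian arms, learning rate, and horizon below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(h):
    """Numerically stable softmax."""
    z = np.exp(h - h.max())
    return z / z.sum()

# Two hypothetical Gaussian arms: arm 0 has the lowest variance.
means = np.array([0.0, 1.0])
stds = np.array([0.5, 2.0])

k, T, rho = 2, 200, 0.5
H = np.zeros(k)  # softmax preferences, H_1 = (0, 0): uniform policy

for t in range(T):
    pi = softmax(H)
    a = rng.choice(k, p=pi)
    # Two independent draws from the chosen arm; (x - y)^2 / 2 is an
    # unbiased estimate of Var(X_a) since E[(X - X')^2] = 2 Var(X).
    x, y = rng.normal(means[a], stds[a], size=2)
    var_hat = 0.5 * (x - y) ** 2
    # Score function of the softmax: grad_H log pi(a) = e_a - pi,
    # so g = var_hat * (e_a - pi) estimates grad_H of sum_a pi_a * Var_a.
    g = var_hat * (np.eye(k)[a] - pi)
    H -= rho * g  # descend: we *minimize* the variance objective

print("final policy:", softmax(H))  # mass should concentrate on arm 0
```

With these (hypothetical) arms the preference mass drifts toward arm 0, the lowest-variance arm, which mirrors the behavior the abstract describes.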

Paper Structure

This paper contains 13 sections, 3 theorems, 23 equations, 4 figures, and 2 algorithms.

Key Result

Lemma 1

For both Algorithm \ref{alg:variance-mab} and Algorithm \ref{alg:risk-aware-mab} the stochastic update direction has the correct conditional expectation; moreover, for Algorithm \ref{alg:variance-mab} $\mathbb{E}[g_t \mid H_t] = \nabla_H \mathcal{L}(H_t)$ while for Algorithm \ref{alg:risk-aware-mab} $\mathbb{E}[g_t \mid H_t] = \nabla_H \mathcal{L}_r(H_t)$, which means that $g_t$ in update \eqref{eq:update_Ht_pg} is indeed an unbiased estimator of the true gradient $\nabla_H \mathcal{L}(H)$ (Algorithm \ref{alg:variance-mab}) or $\nabla_H \mathcal{L}_r(H)$ (Algorithm \ref{alg:risk-aware-mab}). $\blacktriangleleft$
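
The chain of equalities behind the lemma is the standard score-function argument. A sketch of the reasoning, assuming the objective $\mathcal{L}(H)=\sum_a \pi_a(H)\,V_a$ with $\pi(H)$ the softmax policy and $V_a$ the variance of arm $a$ (notation inferred from the statement, not quoted from the paper):

$$\nabla_H \mathcal{L}(H) = \sum_a V_a\,\nabla_H \pi_a(H) = \sum_a \pi_a(H)\,V_a\,\nabla_H \log \pi_a(H) = \mathbb{E}_{a\sim\pi(H)}\!\left[V_a\,\nabla_H \log \pi_a(H)\right].$$

Since two independent draws $X,X'$ from arm $a$ satisfy $\mathbb{E}\!\left[\tfrac{1}{2}(X-X')^2\right]=V_a$, the estimator $g_t=\tfrac{1}{2}(X-X')^2\,\nabla_H \log \pi_{a_t}(H)$ inherits $\mathbb{E}[g_t]=\nabla_H \mathcal{L}(H)$ by the tower property; the same argument applies to $\mathcal{L}_r$ with the risk-aware reward in place of the variance.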

Figures (4)

  • Figure 1: The toy example in Section \ref{sec:toy_2arms} for $k=2$ arms. The average regret (left plot) and average optimal-action frequency (right plot); we also display the 95% confidence interval for each. Run details: $H_1=(0,0)$ (uniform), learning rate $\rho_t=0.5$, $200$ time steps.
  • Figure 2: The toy example in Section \ref{sec:toy_2arms} for $k=10$ arms. The average regret (left plot) and average optimal-action frequency (right plot); we also display the 95% confidence interval for each. Run details: $H_1=(0,...,0)$ (uniform), learning rate $\rho_t=0.05$, $300$ time steps.
  • Figure 3: Test case in Section \ref{sec:difficult10}: the average regret and its 95% confidence interval; $H_1=(0,...,0)$ (uniform) and learning rate $\rho_t=0.1$.
  • Figure 4: Test case in Section \ref{sec:difficult10}: the average number of optimal-arm selections and its 95% confidence interval; $H_1=(0,...,0)$ (uniform) and learning rate $\rho_t=0.1$.
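
To emulate the Figure 1 protocol ($k=2$ arms, $H_1=(0,0)$, $\rho_t=0.5$, $200$ time steps, averages over repeated runs), a minimal driver built on the sketch above; the arm distributions and the number of runs are assumptions, and "regret" is taken here to be the excess variance of the pulled arm.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(h):
    z = np.exp(h - h.max())
    return z / z.sum()

def run(T=200, rho=0.5, means=(0.0, 1.0), stds=(0.5, 2.0)):
    """One trajectory: per-step excess variance and optimal-arm indicator."""
    means, stds = np.asarray(means), np.asarray(stds)
    k, best = len(stds), int(np.argmin(stds))  # lowest-variance arm is optimal
    H = np.zeros(k)                            # H_1 = (0, 0): uniform policy
    regret, optimal = np.empty(T), np.empty(T)
    for t in range(T):
        pi = softmax(H)
        a = rng.choice(k, p=pi)
        x, y = rng.normal(means[a], stds[a], size=2)
        H -= rho * 0.5 * (x - y) ** 2 * (np.eye(k)[a] - pi)
        regret[t] = stds[a] ** 2 - stds[best] ** 2
        optimal[t] = float(a == best)
    return regret, optimal

runs = [run() for _ in range(100)]
avg_regret = np.mean([r for r, _ in runs], axis=0)
avg_optimal = np.mean([o for _, o in runs], axis=0)
print(f"final avg regret: {avg_regret[-1]:.3f}, "
      f"optimal-arm frequency: {avg_optimal[-1]:.2f}")
```

Plotting `avg_regret` and `avg_optimal` against the time step gives curves analogous to those reported in Figures 1 and 2; confidence intervals can be computed from the per-run arrays.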

Theorems & Definitions (7)

  • Lemma 1
  • Proof
  • Proposition 1
  • Proof
  • Remark 1
  • Proposition 2
  • Proof