Online convex optimization in the bandit setting: gradient descent without a gradient

Abraham D. Flaxman, Adam Tauman Kalai, H. Brendan McMahan

TL;DR

This paper extends Zinkevich's online convex optimization framework to the bandit setting by using a simple one-point gradient estimator computed from a single function evaluation per round. The estimator is, in expectation, the gradient of a smoothed version of the objective, enabling gradient-descent-style updates under adversarial bandit feedback with provable regret bounds. The main results show expected regret of O(n^{5/6}) in the basic bandit setting, improved to O(n^{3/4}) under a Lipschitz assumption, and further refined by reshaping the feasible set into isotropic position to reduce the geometric dependence, yielding bounds that scale with √(CLR), where C bounds the costs, L is the Lipschitz constant, and R is the radius of the feasible set. The work connects to a broad literature on online optimization, stochastic approximation, and bandit methods, and outlines extensions to adaptive adversaries, adaptive step sizes, and unconstrained domains.

Abstract

We consider the general online convex optimization framework introduced by Zinkevich. In this setting, there is a sequence of convex functions. Each period, we must choose a single point (from some feasible set) and pay a cost equal to the value of the next function at our chosen point. Zinkevich shows that, if each function is revealed after the choice is made, then one can achieve vanishingly small regret relative to the best single decision chosen in hindsight. We extend this to the bandit setting, where we do not find out the entire functions but rather just their value at our chosen point. We show how to achieve vanishingly small regret in this setting. Our approach uses a simple approximation of the gradient that is computed from evaluating a function at a single (random) point. We show that this estimate is sufficient to mimic Zinkevich's gradient descent online analysis, without access to the gradient (being able to evaluate the function only at a single point).
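
To make the update concrete, here is a minimal sketch of one round of bandit gradient descent in Python/NumPy. The names bandit_gradient_step and cost_at, the step size eta, the exploration radius delta, and the Euclidean-ball feasible set are illustrative assumptions, not the paper's exact algorithm or parameter schedule; in particular, the paper plays points inside a slightly shrunken set so that the perturbed point $x + \delta u$ stays feasible.

    import numpy as np

    def bandit_gradient_step(x, cost_at, delta, eta, radius):
        """One round of bandit gradient descent (illustrative sketch).

        x       -- current point in R^d
        cost_at -- bandit oracle: returns the scalar cost at one point
        delta   -- exploration radius of the one-point estimate
        eta     -- step size
        radius  -- radius of the (assumed) Euclidean feasible ball
        """
        d = x.shape[0]

        # Draw a uniformly random unit vector u.
        u = np.random.standard_normal(d)
        u /= np.linalg.norm(u)

        # Single function evaluation at the perturbed point: this is the
        # only feedback available in the bandit setting.
        cost = cost_at(x + delta * u)

        # One-point gradient estimate; by Lemma 1 below, its expectation
        # is the gradient of the delta-smoothed cost function.
        g_hat = (d / delta) * cost * u

        # Gradient step, then project back onto the feasible ball.
        y = x - eta * g_hat
        norm = np.linalg.norm(y)
        return y if norm <= radius else y * (radius / norm)

Running this update each round, with delta and eta tuned to the horizon n, is what yields the O(n^{3/4}) and O(n^{5/6}) expected regret bounds quoted above.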

Paper Structure

This paper contains 10 sections, 5 theorems, 45 equations, and 1 figure.

Key Result

Lemma 1

Fix $\delta > 0$. Over random unit vectors $u$,

$\mathbb{E}_u\!\left[f(x + \delta u)\,u\right] = \frac{\delta}{d}\,\nabla \hat{f}(x),$

where $\hat{f}(x) = \mathbb{E}_{v \in \mathbb{B}}[f(x + \delta v)]$ is the average of $f$ over the ball of radius $\delta$ centered at $x$, and $d$ is the dimension. Equivalently, $(d/\delta)\,f(x + \delta u)\,u$ is an unbiased estimator of the gradient of the smoothed function $\hat{f}$, computed from a single function evaluation.
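
As an informal sanity check (an addition here, not from the paper), the identity can be verified numerically. For the quadratic $f(x) = \|x\|^2$, smoothing over a ball adds only a constant, so $\nabla \hat{f}(x) = \nabla f(x) = 2x$, and the one-point estimate should average to $(\delta/d)\,2x$:

    import numpy as np

    # Monte Carlo check of Lemma 1 for f(x) = ||x||^2. Smoothing a
    # quadratic over a ball adds only a constant, so grad f_hat(x) = 2x
    # and E[f(x + delta*u) u] should equal (delta/d) * 2x.
    rng = np.random.default_rng(0)
    d, delta = 5, 0.1
    x = rng.standard_normal(d)

    g = rng.standard_normal((1_000_000, d))
    u = g / np.linalg.norm(g, axis=1, keepdims=True)  # uniform unit vectors
    vals = np.sum((x + delta * u) ** 2, axis=1)       # f(x + delta*u)
    one_point = (vals[:, None] * u).mean(axis=0)      # E[f(x + delta*u) u]

    print(one_point)               # Monte Carlo average
    print((delta / d) * 2 * x)     # (delta/d) * grad f_hat(x)

The two printed vectors agree up to Monte Carlo error, which shrinks as the sample count grows.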

Figures (1)

  • Figure 1: Bandit gradient descent algorithm

Theorems & Definitions (14)

  • Lemma 1
  • proof
  • Lemma 2
  • proof
  • proof
  • proof
  • proof
  • Theorem 1
  • proof
  • Theorem 2
  • ...and 4 more