Bandits with Preference Feedback: A Stackelberg Game Perspective

Barna Pásztor, Parnian Kassraie, Andreas Krause

TL;DR

This work tackles bandit optimization with preference feedback over continuous, kernelized domains by introducing MAXMINLCB, a zero-sum Stackelberg acquisition that jointly selects action pairs using a kernelized, logistic-style confidence framework. By proving an equivalence between the dueling preference loss and a logistic loss under a dueling kernel, it derives anytime-valid confidence sets for the latent utility difference and establishes an information-theoretic regret bound $R^{\mathrm{D}}(T)=\mathcal{O}(\gamma_T^{\mathrm{D}}\sqrt{T})$. The method rests on a principled action-pair selection rule (the Leader maximizes the LCB, the Follower best-responds) that balances exploration and exploitation through a game-theoretic lens, and it demonstrates superior performance on diverse benchmarks and a real-world Yelp dataset. The results generalize beyond linear or finite domains, offering a scalable, principled framework for human-in-the-loop optimization with potential extensions to RLHF-like settings and welfare mechanisms. Overall, the paper advances both the theory and practice of kernelized preference-based bandits, with robust confidence guarantees and practical efficacy.
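To make the pair-selection rule concrete, here is a minimal sketch of the zero-sum Stackelberg acquisition over a finite candidate set. The function `lcb` is a hypothetical stand-in for the paper's preference-based lower confidence bound on the latent utility difference; it is illustrative, not the authors' implementation.

```python
# Minimal sketch of MaxMinLCB-style pair selection on a finite candidate set.
# `lcb(x, xp)` is a hypothetical lower confidence bound on f(x) - f(xp);
# in the paper it would come from the kernelized logistic confidence sets.

def maxminlcb_pair(candidates, lcb):
    # Leader: pick x maximizing its worst-case (Follower-minimized) LCB.
    def follower_value(x):
        return min(lcb(x, xp) for xp in candidates)

    x = max(candidates, key=follower_value)
    # Follower: best-respond by minimizing the Leader's LCB,
    # i.e. propose the strongest competitor to x.
    xp = min(candidates, key=lambda cand: lcb(x, cand))
    return x, xp


# Toy usage with a made-up confidence bound.
if __name__ == "__main__":
    cands = [0.0, 0.5, 1.0]
    toy_lcb = lambda x, xp: (x - xp) - 0.1 * abs(x - xp)  # fake LCB on f(x)-f(xp)
    print(maxminlcb_pair(cands, toy_lcb))
```

The max-min structure is what balances the two exploration levels: the Leader only commits to actions whose pessimistic value survives the Follower's strongest challenge.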

Abstract

Bandits with preference feedback present a powerful tool for optimizing unknown target functions when only pairwise comparisons are allowed instead of direct value queries. This model allows for incorporating human feedback into online inference and optimization and has been employed in systems for fine-tuning large language models. The problem is well understood in simplified settings with linear target functions or over finite small domains that limit practical interest. Taking the next step, we consider infinite domains and nonlinear (kernelized) rewards. In this setting, selecting a pair of actions is quite challenging and requires balancing exploration and exploitation at two levels: within the pair, and along the iterations of the algorithm. We propose MAXMINLCB, which emulates this trade-off as a zero-sum Stackelberg game, and chooses action pairs that are informative and yield favorable rewards. MAXMINLCB consistently outperforms existing algorithms and satisfies an anytime-valid rate-optimal regret guarantee. This is due to our novel preference-based confidence sequences for kernelized logistic estimators.
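As a pointer to how the reduction behind these confidence sequences can work, the following is one standard construction (our hedged reading, not necessarily the paper's exact definitions): under a Bradley-Terry-style link, preferences are logistic observations of the utility difference, which lives in the RKHS of a "dueling" kernel built from the base kernel $k$.

```latex
% One standard construction (an assumption here, not necessarily the paper's
% exact definitions): preferences follow a logistic link on utility differences,
\mathbb{P}(\bm{x} \succ \bm{x}') \;=\; \sigma\bigl(f(\bm{x}) - f(\bm{x}')\bigr),
\qquad \sigma(z) = \frac{1}{1+e^{-z}},
% and g(\bm{x},\bm{x}') := f(\bm{x}) - f(\bm{x}') lies in the RKHS of the
% dueling kernel
k^{\mathrm{D}}\bigl((\bm{x},\bm{x}'),(\bm{y},\bm{y}')\bigr)
  \;=\; k(\bm{x},\bm{y}) - k(\bm{x},\bm{y}') - k(\bm{x}',\bm{y}) + k(\bm{x}',\bm{y}'),
% so a preference query is exactly a logistic query about g.
```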

Paper Structure

This paper contains 23 sections, 18 theorems, 100 equations, 9 figures, 2 tables, and 7 algorithms.

Key Result

Proposition 1

The regularized negative log-likelihood loss of eq:logistic_loss has a unique minimizer $f_t$, which takes the form $f_t(\cdot) = \sum_{\tau=1}^t \alpha_\tau k(\cdot, \bm{x}_\tau)$, where $(\alpha_1, \dots, \alpha_t) =: \bm{\alpha}_t \in \mathbb{R}^t$. Equivalently, $f_t(\bm{x}) = \bm{\alpha}_t^\top \bm{k}_t(\bm{x})$ with $\bm{k}_t(\bm{x}) = (k(\bm{x}_1, \bm{x}), \dots, k(\bm{x}_t, \bm{x})) \in \mathbb{R}^t$.
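Proposition 1 means the infinite-dimensional estimation problem reduces to a $t$-dimensional convex one over the coefficient vector $\bm{\alpha}_t$. Below is a minimal sketch under assumed conventions (binary labels $y \in \{0,1\}$, regularizer $\tfrac{\lambda}{2}\lVert f\rVert_k^2 = \tfrac{\lambda}{2}\bm{\alpha}^\top K \bm{\alpha}$); it is illustrative, not the paper's code.

```python
import numpy as np
from scipy.optimize import minimize

def fit_kernel_logistic(K, y, lam=1.0):
    """Sketch of the kernelized logistic estimator suggested by Proposition 1.

    Assumptions (not from the paper's code): K is the t x t Gram matrix
    k(x_i, x_j), y in {0,1}^t are binary feedback labels, and the penalty is
    lam/2 * ||f||_k^2 = lam/2 * alpha^T K alpha. By the representer theorem,
    f_t(.) = sum_tau alpha_tau k(., x_tau), so we optimize over alpha in R^t.
    """
    t = len(y)

    def loss(alpha):
        logits = K @ alpha  # f_t evaluated at the observed points
        # Logistic negative log-likelihood: sum log(1 + e^z) - y * z
        nll = np.sum(np.logaddexp(0.0, logits) - y * logits)
        return nll + 0.5 * lam * alpha @ K @ alpha

    res = minimize(loss, np.zeros(t), method="L-BFGS-B")
    return res.x  # alpha_t; predict at a new x via k_t(x) @ alpha_t
```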

Figures (9)

  • Figure 1: Regret of learning the Ackley function with logistic and preference feedback. (a) Same UCB algorithms, each using a different confidence set; LGP-UCB performs best, showcasing the power of Theorem 2 (Kernelized Logistic Confidence Sequences). (b) Algorithms with different acquisition functions, all using our confidence sets; MaxMinLCB is more sample-efficient.
  • Figure 2: LGP-UCB is more sample-efficient when making restaurant recommendations based on the Yelp open dataset with preference feedback. All baselines use the confidence sets of Corollary 5 (Kernelized Preference-based Confidence Sequences).
  • Figure 3: Confidence sets for an illustrative problem with $3$ arms at a single time step. Annotated arrows highlight the action selection for three common approaches. MaxMinLCB selects the action pair $(1,2)$ with the least regret. Upper-bound maximization ($\textsc{Optimism}$) and information maximization ($\textsc{Max Info}$) choose sub-optimal arms.
  • Figure 4: Regret with Branin utility function with logistic (left) and preference (right) feedback.
  • Figure 5: Top to bottom: Regret for the Eggholder, Hölder, Matyas, Michalewicz, and Rosenbrock functions, with logistic (left) and preference (right) feedback.
  • ...and 4 more figures

Theorems & Definitions (33)

  • Proposition 1: Logistic Representer Theorem
  • Theorem 2: Kernelized Logistic Confidence Sequences
  • Corollary 3
  • Proposition 4
  • Corollary 5: Kernelized Preference-based Confidence Sequences
  • Theorem 6
  • Lemma 7: Corollary 1 of whitehouse2023improved
  • Lemma 8: Gradient Space Confidence Bounds
  • Proof of Lemma 8 (Gradient Space Confidence Bounds)
  • Lemma 9
  • ...and 23 more