
Finite-Time Analysis of Q-Value Iteration for General-Sum Stackelberg Games

Narim Jeong, Donghwan Lee

Abstract

Reinforcement learning has been successful both empirically and theoretically in single-agent settings, but extending these results to multi-agent reinforcement learning in general-sum Markov games remains challenging. This paper studies the convergence of Stackelberg Q-value iteration in two-player general-sum Markov games from a control-theoretic perspective. We introduce a relaxed policy condition tailored to the Stackelberg setting and model the learning dynamics as a switching system. By constructing upper and lower comparison systems, we establish finite-time error bounds for the Q-functions and characterize their convergence properties. Our results provide a novel control-theoretic perspective on Stackelberg learning. Moreover, to the best of the authors' knowledge, this paper offers the first finite-time convergence guarantees for Q-value iteration in general-sum Markov games under Stackelberg interactions.
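The paper's exact operator is not reproduced on this page, so the following is a minimal sketch of a standard Stackelberg (leader-follower) Q-value iteration update for a two-player general-sum Markov game, using common notation that is assumed rather than taken from the paper: Q^1 and Q^2 are the leader's and follower's Q-functions, r^i the rewards, P the transition kernel, and gamma the discount factor. The paper's relaxed policy condition may modify how the argmax selections below are resolved.

% Sketch in standard notation (assumed, not reproduced from the paper):
% the follower best-responds to each leader action, and the leader
% commits to the action that maximizes its own Q-value given that response.
\begin{align*}
  b_k^*(s,a) &\in \arg\max_{b}\; Q_k^2(s,a,b)
    && \text{(follower best response)} \\
  a_k^*(s) &\in \arg\max_{a}\; Q_k^1\bigl(s,a,b_k^*(s,a)\bigr)
    && \text{(leader's anticipating choice)} \\
  Q_{k+1}^i(s,a,b) &= r^i(s,a,b) + \gamma \sum_{s'} P(s'\mid s,a,b)\,
     Q_k^i\bigl(s',a_k^*(s'),b_k^*(s',a_k^*(s'))\bigr),
    \quad i\in\{1,2\}.
\end{align*}

Because the argmax selections change with the iteration index k, the update switches among a family of affine maps, which is what motivates the switching-system view described in the abstract.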


Figures (3)

  • Figure E1: Illustration of epsilon values satisfying Assumption 2.
  • Figure E2: Illustration of the Q-function error of the leader and its corresponding bounds from Theorem 1.
  • Figure E3: Illustration of the Q-function error of the follower and its corresponding bounds from Theorem 1.
