
Restless Bandits with Individual Penalty Constraints: A New Near-Optimal Index Policy and How to Learn It

Nida Zamir, I-Hong Hou

Abstract

This paper investigates the Restless Multi-Armed Bandit (RMAB) framework under individual penalty constraints to address resource allocation challenges in dynamic wireless networked environments. Unlike conventional RMAB models, our model allows each user (arm) to have distinct and stringent performance constraints, such as energy limits, activation limits, or age-of-information requirements, enabling the capture of diverse objectives including fairness and efficiency. To find the optimal resource allocation policy, we propose a new Penalty-Optimal Whittle (POW) index policy. The POW index of a user depends only on that user's transition kernel and penalty constraints, and is invariant to system-wide features such as the number of users present and the amount of available resource. This makes it computationally tractable to calculate the POW indices offline without any need for online adaptation. Moreover, we theoretically prove that the POW index policy is asymptotically optimal while satisfying all individual penalty constraints. We also introduce a deep reinforcement learning algorithm to efficiently learn the POW index on the fly. Simulation results across various applications and system configurations further demonstrate that the POW index policy not only achieves near-optimal performance but also significantly outperforms existing policies.
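The abstract describes an index policy: each user's index is precomputed offline from that user's own model, and at runtime the scheduler simply activates the users with the highest current indices subject to the resource budget. The sketch below illustrates this generic selection step only; the `indices` table, `index_policy_step` function, and the state/budget encoding are illustrative assumptions, not the paper's actual POW computation.

```python
import numpy as np

def index_policy_step(indices, states, budget):
    """One decision step of a generic index policy (illustrative sketch).

    indices: hypothetical precomputed table, shape (n_arms, n_states),
             where indices[i, s] is arm i's index in state s.
    states:  current state of each arm, shape (n_arms,).
    budget:  number of arms that may be activated this step.
    Returns a 0/1 activation vector of shape (n_arms,).
    """
    # Look up each arm's index at its current state.
    current = indices[np.arange(len(states)), states]
    # Activate the `budget` arms with the largest indices.
    chosen = np.argsort(current)[::-1][:budget]
    action = np.zeros(len(states), dtype=int)
    action[chosen] = 1
    return action

# Example: 3 arms, 2 states each, budget of 2 activations.
idx = np.array([[0.1, 0.9],
                [0.5, 0.2],
                [0.3, 0.8]])
print(index_policy_step(idx, np.array([1, 0, 1]), budget=2))  # → [1 0 1]
```

Because the index table depends only on per-arm quantities, the per-step cost is a table lookup plus a top-k selection, which is what makes index policies attractive at scale.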

Paper Structure

This paper contains 19 sections, 2 theorems, 48 equations, 6 figures, 3 tables, 1 algorithm.

Key Result

Theorem 1

If $\pi^{Rel}$ exists and has a global attractor $\vec{y}^{\text{Rel}}$, $\pi^{Ind}$ has a global attractor $\vec{y}^{\text{Ind}}$, and $\vec{y}^{\text{Ind}}$ has a strict index separator $\lambda^{Ind}$, then $\vec{y}^{\text{Rel}}=\vec{y}^{\text{Ind}}$ and $\pi^{Rel}(\vec{y}^{\text{Ind}})=\pi^{Ind}(\vec{y}^{\text{Ind}})$.

Figures (6)

  • Figure 1: Average reward and average constraint violation for throughput maximization with activation constraints.
  • Figure 2: Average reward and average constraint violation during training for throughput maximization with activation constraints.
  • Figure 3: Average reward and average constraint violation for remote sensing.
  • Figure 4: Average reward and average constraint violation during DeepPOW training for remote sensing.
  • Figure 5: Average reward and average constraint violation for remote sensing for throughput maximization with service regularity constraints.
  • ...and 1 more figure

Theorems & Definitions (8)

  • Definition 1
  • Definition 2
  • Definition 3: Global attractor
  • Definition 4: Strict index separator
  • Theorem 1
  • Proof of Theorem 1
  • Theorem 2
  • Proof of Theorem 2