
Boosted Distributional Reinforcement Learning: Analysis and Healthcare Applications

Zequn Chen, Wesley J. Marrero

Abstract

Researchers and practitioners are increasingly considering reinforcement learning to optimize decisions in complex domains like robotics and healthcare. To date, these efforts have largely utilized expectation-based learning. However, relying on expectation-focused objectives may be insufficient for making consistent decisions in highly uncertain situations involving multiple heterogeneous groups. While distributional reinforcement learning algorithms have been introduced to model the full distributions of outcomes, they can yield large discrepancies in realized benefits among comparable agents. This challenge is particularly acute in healthcare settings, where physicians (controllers) must manage multiple patients (subordinate agents) with uncertain disease progression and heterogeneous treatment responses. We propose a Boosted Distributional Reinforcement Learning (BDRL) algorithm that optimizes agent-specific outcome distributions while enforcing comparability among similar agents and analyze its convergence. To further stabilize learning, we incorporate a post-update projection step formulated as a constrained convex optimization problem, which efficiently aligns individual outcomes with a high-performing reference within a specified tolerance. We apply our algorithm to manage hypertension in a large subset of the US adult population by categorizing individuals into cardiovascular disease risk groups. Our approach modifies treatment plans for median and vulnerable patients by mimicking the behavior of high-performing references in each risk group. Furthermore, we find that BDRL improves the number and consistency of quality-adjusted life years compared with reinforcement learning baselines.
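The post-update projection step described in the abstract can be illustrated with a minimal sketch. Assuming the outcome distribution is represented by a vector of quantiles (a common choice in distributional RL) and that "within a specified tolerance" means an element-wise bound around the reference's quantiles, the constrained convex program separates per coordinate and reduces to a clip. The function name, the quantile representation, and the element-wise form of the constraint are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def project_to_reference(z, ref, eps):
    """Project an agent's quantile vector z onto the set of vectors
    within eps (element-wise) of a high-performing reference ref,
    minimizing the squared Euclidean distance to z.

    The constrained convex program
        min ||x - z||^2   s.t.   |x_i - ref_i| <= eps
    separates across coordinates, so the minimizer is a clip.
    """
    return np.clip(z, ref - eps, ref + eps)

# Toy example: a lagging agent's return quantiles pulled toward a reference.
z = np.array([0.0, 1.0, 2.0, 3.0])
ref = np.array([1.0, 1.5, 2.5, 3.5])
projected = project_to_reference(z, ref, eps=0.4)
# projected == [0.6, 1.1, 2.1, 3.1]
```

Because the solution is closed-form, the projection adds negligible cost per update, which is consistent with the abstract's claim that the alignment step is efficient.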

Paper Structure

This paper contains 43 sections, 6 theorems, 49 equations, 8 figures, 3 tables, and 2 algorithms.

Key Result

Theorem 1

The distance between the updated distributional estimate $Z_i^{(t)}$ and the optimal group estimate $Z_{g_k}^{*}$ is nonexpansive under the mixture operation. Moreover, the distance at the subsequent step satisfies $d^{(t+1)} \leq \max\{d^{(t)}, \epsilon\}$, where $\epsilon > 0$ is a predefined tolerance.
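The bound in Theorem 1 can be checked numerically under one plausible reading of the mixture operation: interpolating quantile vectors, in which case the $W_2$ distance between equally weighted quantile representations is the root-mean-square of quantile differences, and the bound follows from convexity of the norm. The mixing weight, sample distributions, and quantile-space form of the mixture are assumptions for this sketch.

```python
import numpy as np

def w2_quantile(z1, z2):
    """W2 distance between two distributions represented by equally
    weighted, sorted quantile vectors: the RMS of quantile differences."""
    return np.sqrt(np.mean((z1 - z2) ** 2))

rng = np.random.default_rng(0)
z_opt = np.sort(rng.normal(2.0, 1.0, 32))   # stand-in for the group optimum Z*_{g_k}
z_ref = np.sort(rng.normal(1.8, 1.0, 32))   # high-performing reference
z = np.sort(rng.normal(0.0, 1.5, 32))       # current estimate Z_i^(t)

alpha = 0.3                                  # assumed mixing weight
eps = w2_quantile(z_ref, z_opt)              # reference's distance to the optimum
d = w2_quantile(z, z_opt)
for _ in range(20):
    z = (1 - alpha) * z + alpha * z_ref      # mixture update in quantile space
    d_next = w2_quantile(z, z_opt)
    # Theorem 1's bound: the distance never rises above max(previous, eps).
    assert d_next <= max(d, eps) + 1e-12
    d = d_next
```

Intuitively, $\|(1-\alpha)z + \alpha r - z^{*}\| \leq (1-\alpha)\,d^{(t)} + \alpha\,\epsilon \leq \max\{d^{(t)}, \epsilon\}$, so the update can shrink toward the reference without overshooting the stated tolerance.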

Figures (8)

  • Figure 1: Overview of the algorithm.
  • Figure 1: Selecting the number of centroids ($k$).
  • Figure 2: Overview of the simulation framework.
  • Figure 2: Finding the optimal number of batches ($B$).
  • Figure 3: Probability of action selection during training for the most vulnerable patient and the most resilient patient across risk clusters before and after boosting. The treatment choice ranging from 0 to 20 represents the index of each of the 21 actions considered.
  • ...and 3 more figures

Theorems & Definitions (11)

  • Theorem 1: The Contraction of Mixtures
  • Proposition 1: $W_2$ Distance Convergence
  • Theorem 2: Convex Quadratic Transformation
  • Proof
  • Theorem 1: The Contraction of Mixtures
  • Proof
  • Proposition 1: $W_2$ Distance Convergence
  • Proof
  • Theorem 2: Convex Quadratic Transformation
  • Proof
  • ...and 1 more