Leveraging the Value of Information in POMDP Planning

Zakariya Laouar, Qi Heng Ho, Zachary Sunberg

Abstract

Partially observable Markov decision processes (POMDPs) offer a principled formalism for planning under state and transition uncertainty. Despite advances made towards solving large POMDPs, obtaining performant policies under limited planning time remains a major challenge due to the curse of dimensionality and the curse of history. For many POMDP problems, the value of information (VOI) - the expected performance gain from reasoning about observations - varies over the belief space. We introduce a dynamic programming framework that exploits this structure by conditionally processing observations based on the value of information at each belief. Building on this framework, we propose Value of Information Monte Carlo planning (VOIMCP), a Monte Carlo Tree Search algorithm that allocates computational effort more efficiently by selectively disregarding observation information when the VOI is low, avoiding unnecessary branching of observations. We provide theoretical guarantees on the near-optimality of our VOI reasoning framework and derive non-asymptotic convergence bounds for VOIMCP. Simulation evaluations demonstrate that VOIMCP outperforms baselines on several POMDP benchmarks.
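To make the core idea concrete, below is a minimal, illustrative Python sketch of VOI-gated observation branching. It is not the paper's VOIMCP algorithm; the function names and the threshold are hypothetical. It uses the standard value-of-information quantity: the expected value of acting on each observation separately, minus the value of the best single action chosen without observing. A planner could branch on observations only when this gap exceeds a threshold, which mirrors the "selectively disregard observations when VOI is low" strategy described above.

```python
def estimated_voi(q_by_obs, obs_probs):
    """VOI proxy at a belief: expected value of choosing the best action
    per observation, minus the value of the best observation-agnostic action.

    q_by_obs:  dict mapping observation -> {action: Q-value estimate}
    obs_probs: dict mapping observation -> probability of that observation
    """
    actions = next(iter(q_by_obs.values())).keys()
    # Value when the agent conditions its action on the observation.
    informed = sum(p * max(q_by_obs[o].values()) for o, p in obs_probs.items())
    # Value when one action must be chosen before seeing the observation.
    uninformed = max(
        sum(p * q_by_obs[o][a] for o, p in obs_probs.items()) for a in actions
    )
    return informed - uninformed


def branch_on_observation(q_by_obs, obs_probs, threshold=0.05):
    """Pay the cost of per-observation tree branching only when the
    estimated value of information exceeds a (hypothetical) threshold."""
    return estimated_voi(q_by_obs, obs_probs) >= threshold
```

For example, if two equally likely observations favor different actions, the VOI is positive and branching is worthwhile; if both observations favor the same action, the VOI is zero and the observation can be disregarded without loss.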

Paper Structure

This paper contains 31 sections, 17 theorems, 115 equations, 3 figures, 1 table, 2 algorithms.

Key Result

Theorem 1

Let $\kappa\in[0,1]$. Then, for any $b$ and $d \geq 1$, ...

Figures (3)

  • Figure 1: Comparative benchmark results presenting discounted cumulative reward against the number of tree queries over $1000$ trials. The lighter colored ribbons around the Monte Carlo mean display the $95\%$ confidence interval.
  • Figure 2: Tree growth statistics. (Top Row) Maximum tree depth vs. number of tree queries. (Bottom Row) Effective action-observation branching factor vs. number of tree queries. Statistics are computed over 100 trials.
  • Figure 3: Difference in discounted cumulative reward between the annealed and standard VOIMCP variants over $1000$ trials. The lighter-colored ribbons around the Monte Carlo mean display the $95\%$ confidence interval.

Theorems & Definitions (29)

  • Theorem 1: Bounded Regret
  • Proof
  • Proposition 1
  • Theorem 2
  • Proof sketch
  • Theorem 3
  • Proof sketch
  • Lemma 1
  • Proof
  • Lemma 2
  • ...and 19 more