
Optimistic Actor-Critic with Parametric Policies for Linear Markov Decision Processes

Max Qiushi Lin, Reza Asad, Kevin Tan, Haque Ishfaq, Csaba Szepesvari, Sharan Vaswani

Abstract

Although actor-critic methods have been successful in practice, their theoretical analyses have several limitations. Specifically, existing theoretical work either sidesteps the exploration problem by making strong assumptions or analyzes impractical methods with complicated algorithmic modifications. Moreover, the actor-critic methods analyzed for linear MDPs often employ the natural policy gradient and construct "implicit" policies without explicit parameterization. Such policies are computationally expensive to sample from, making environment interactions inefficient. To that end, we focus on finite-horizon linear MDPs and propose an optimistic actor-critic framework that uses parametric log-linear policies. In particular, we introduce a tractable $\textit{logit-matching}$ regression objective for the actor. For the critic, we use approximate Thompson sampling via Langevin Monte Carlo to obtain optimistic value estimates. We prove that the resulting algorithm achieves $\widetilde{\mathcal{O}}(\epsilon^{-4})$ and $\widetilde{\mathcal{O}}(\epsilon^{-2})$ sample complexity in the on-policy and off-policy settings, respectively. Our results match prior theoretical work in achieving the state-of-the-art sample complexity, while our algorithm is more aligned with practice.
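To make the abstract's two main ingredients concrete, here is a minimal sketch of a log-linear (softmax) policy and a least-squares logit-matching regression objective for the actor. All names, shapes, and the particular least-squares form are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def log_linear_policy(theta, features):
    """Log-linear policy pi(a|s) proportional to exp(<theta, phi(s, a)>).

    features: (num_actions, d) array whose rows are phi(s, a) for each action a.
    """
    logits = features @ theta
    logits -= logits.max()          # shift for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

def logit_matching_loss(theta, features, target_logits):
    """Least-squares regression of the policy's logits onto target logits
    (e.g. scaled value estimates); a hypothetical stand-in for the paper's
    logit-matching objective."""
    pred = features @ theta
    return 0.5 * np.mean((pred - target_logits) ** 2)

def logit_matching_step(theta, features, target_logits, lr=0.1):
    """One gradient-descent step on the logit-matching loss."""
    pred = features @ theta
    grad = features.T @ (pred - target_logits) / len(target_logits)
    return theta - lr * grad
```

Because the objective is an ordinary regression over the policy parameters, the actor update is tractable and sampling from the resulting policy only requires a softmax over features, in contrast to the "implicit" policies mentioned above.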

Paper Structure

This paper contains 84 sections, 37 theorems, 176 equations, 9 figures, 5 tables, 4 algorithms.

Key Result

Theorem 1

Given a sequence of linear functions $\{\langle p^t, g^t \rangle\}_{t \in [T]}$ for a sequence of vectors $\{g^t\}_{t \in [T]}$, where for any $t \in [T]$, $p^t \in \Delta(\mathcal{A})$, $g^t \in \mathbb{R}^{\lvert \mathcal{A} \rvert}$, and $\lVert g^t \rVert_\infty \leq H$. Consider the sequence $\{p^t\}_{t \in [T]}$. Let $\epsilon^t \coloneqq \mathrm{KL}(u \,\|\, p^{t+1}) - \mathrm{KL}(u \,\|\, p^{t+1/2})$ be the ...
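The critic side of the framework, approximate Thompson sampling via Langevin Monte Carlo as described in the abstract, can be sketched as noisy gradient descent on a regularized least-squares loss over linear critic weights. The loss, step size, and shapes below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def lmc_sample(w, X, y, step=1e-3, reg=1.0, iters=100, rng=None):
    """Approximate posterior sampling of linear critic weights via
    Langevin Monte Carlo:

        w <- w - step * grad L(w) + sqrt(2 * step) * xi,   xi ~ N(0, I),

    where L(w) = 0.5 * ||X w - y||^2 + 0.5 * reg * ||w||^2 is a
    ridge-regularized least-squares loss (an illustrative choice).
    X: (n, d) feature matrix; y: (n,) regression targets.
    """
    rng = rng or np.random.default_rng(0)
    for _ in range(iters):
        grad = X.T @ (X @ w - y) + reg * w
        w = w - step * grad + np.sqrt(2 * step) * rng.normal(size=w.shape)
    return w
```

Drawing a fresh weight sample per episode and acting greedily with respect to it injects randomized optimism into the value estimates, which is how Thompson-sampling-style critics drive exploration without explicit bonus terms.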

Figures (9)

  • Figure 1: Comparison of LMC-NPG-EXP (our proposed algorithm), LMC-NPG-IMP (memory-intensive variant), and ${\texttt{LMC}}$ (value-based baseline) in the Random MDP.
  • Figure 2: Example of the Deep Sea environment from Osband et al. (2019).
  • Figure 3: Comparison of LMC-NPG-EXP (our proposed framework), LMC-NPG-IMP (memory-intensive variant), and ${\texttt{LMC}}$ (value-based baseline) in the Deep Sea environment.
  • Figure 4: Ablation of the exploration mechanism for LMC-NPG-EXP.
  • Figure 5: Effect of feature dimension $d$ in Deep Sea.
  • ...and 4 more figures

Theorems & Definitions (41)

  • Definition 1: Linear MDP
  • Theorem 1
  • Lemma 1
  • Remark 1
  • Definition 2
  • Lemma 2
  • Theorem 2
  • Theorem 3
  • Lemma 3
  • Remark 2
  • ...and 31 more