Anticipatory Reinforcement Learning: From Generative Path-Laws to Distributional Value Functions

Daniel Bloch

Abstract

This paper introduces Anticipatory Reinforcement Learning (ARL), a novel framework designed to bridge the gap between non-Markovian decision processes and classical reinforcement learning architectures, specifically under the constraint of a single observed trajectory. In environments characterised by jump-diffusions and structural breaks, traditional state-based methods often fail to capture the essential path-dependent geometry required for accurate foresight. We resolve this by lifting the state space into a signature-augmented manifold, where the history of the process is embedded as a dynamical coordinate. By utilising a self-consistent field approach, the agent maintains an anticipated proxy of the future path-law, allowing for a deterministic evaluation of expected returns. This transition from stochastic branching to a single-pass linear evaluation significantly reduces computational complexity and variance. We prove that this framework preserves fundamental contraction properties and ensures stable generalisation even in the presence of heavy-tailed noise. Our results demonstrate that by grounding reinforcement learning in the topological features of path-space, agents can achieve proactive risk management and superior policy stability in highly volatile, continuous-time environments.
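
The signature lifting described above can be made concrete with a small sketch. The snippet below is illustrative rather than the paper's implementation: it computes the depth-2 truncated signature of an observed history via Chen's identity and concatenates it with the current observation to form a signature-augmented state. The helper names signature_features and augmented_state are hypothetical.

    import numpy as np

    def signature_features(path):
        """Depth-2 truncated signature of a piecewise-linear path.

        path: (N, d) array of observations. Returns the flattened level-1
        (net increments) and level-2 (iterated integral) signature terms.
        """
        increments = np.diff(path, axis=0)        # (N-1, d) segment increments
        d = path.shape[1]
        s1 = np.zeros(d)                          # level-1 terms
        s2 = np.zeros((d, d))                     # level-2 terms
        for dx in increments:
            # Chen's identity for appending one linear segment to the path
            s2 += np.outer(s1, dx) + 0.5 * np.outer(dx, dx)
            s1 += dx
        return np.concatenate([s1, s2.ravel()])

    def augmented_state(history):
        """Signature-augmented state: current observation plus history signature."""
        return np.concatenate([history[-1], signature_features(history)])

In this reading, the signature terms play the role of the "dynamical coordinate" encoding the path's history, so a downstream value function can remain a function of the (lifted) state alone.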

Paper Structure

This paper contains 68 sections, 25 theorems, 121 equations, and 1 table.

Key Result

Lemma 2.1

The distributional Bellman operator $\mathcal{T}^\pi$ is a contraction mapping in the $p$-Wasserstein distance $w_p$ for $p \ge 1$ over the space of measures with bounded $p$-th moments.
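
For orientation, this is the standard contraction property from distributional reinforcement learning. Writing $\bar{w}_p$ for the supremal $p$-Wasserstein metric over state-action pairs and $\gamma \in [0,1)$ for the discount factor, the lemma asserts an inequality of the following form (the paper's exact choice of metric may differ slightly):

$$\bar{w}_p\big(\mathcal{T}^\pi Z_1,\, \mathcal{T}^\pi Z_2\big) \;\le\; \gamma\, \bar{w}_p(Z_1, Z_2), \qquad \bar{w}_p(Z_1, Z_2) := \sup_{(s,a)} w_p\big(Z_1(s,a),\, Z_2(s,a)\big).$$

Iterating the operator therefore converges to a unique fixed-point return distribution at geometric rate, which is what licenses the single-pass evaluation claimed in the abstract.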

Theorems & Definitions (65)

  • Definition 2.1: Filtered Proxy and Jump-Flow Latent Propagation
  • Definition 2.2: Anticipatory Latent Propagation
  • Definition 2.3: Anticipatory Generative Flow
  • Definition 2.4: Policy and Expected Return
  • Definition 2.5: State-Value and Action-Value Functions
  • Definition 2.6: Value Distribution
  • Definition 2.7: Distributional Bellman Operator
  • Lemma 2.1: $w_p$-Contraction
  • Proposition 1: Optimal Distributional Operator
  • Proposition 2: Signature-Linear Reward Approximation (see the sketch after this list)
  • ...and 55 more
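
Proposition 2's title suggests that expected rewards are represented as a linear functional of the signature features. As an illustration only, and not the paper's construction, the sketch below fits such a signature-linear reward model by ordinary least squares, reusing the hypothetical signature_features helper from the earlier sketch; the function names here are likewise hypothetical.

    import numpy as np

    def fit_signature_linear_reward(histories, rewards):
        """Least-squares fit of rewards on signature features.

        histories: list of (N_k, d) arrays of past observations.
        rewards:   (K,) array of realised rewards, one per history window.
        Returns a weight vector mapping signature features to a reward estimate.
        """
        Phi = np.stack([signature_features(h) for h in histories])   # (K, d + d^2)
        weights, *_ = np.linalg.lstsq(Phi, rewards, rcond=None)
        return weights

    def predict_reward(history, weights):
        """Signature-linear reward estimate for a new history window."""
        return signature_features(history) @ weights

Because the features are fixed functionals of the path, the fit reduces to a single linear solve, which is consistent with the single-pass linear evaluation emphasised in the abstract.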