
Diagnosing Non-Markovian Observations in Reinforcement Learning via Prediction-Based Violation Scoring

Naveen Mysore

Abstract

Reinforcement learning algorithms assume that observations satisfy the Markov property, yet real-world sensors frequently violate this assumption through correlated noise, latency, or partial observability. Standard performance metrics conflate Markov breakdowns with other sources of suboptimality, leaving practitioners without diagnostic tools for such violations. This paper introduces a prediction-based scoring method that quantifies non-Markovian structure in observation trajectories. A random forest first removes nonlinear Markov-compliant dynamics; ridge regression then tests whether historical observations reduce prediction error on the residuals beyond what the current observation provides. The resulting score is bounded in [0, 1] and requires no causal graph construction. Evaluation spans six environments (CartPole, Pendulum, Acrobot, HalfCheetah, Hopper, Walker2d), three algorithms (PPO, A2C, SAC), controlled AR(1) noise at six intensity levels, and 10 seeds per condition. In post-hoc detection, 7 of 16 environment-algorithm pairs, primarily high-dimensional locomotion tasks, show significant positive monotonicity between noise intensity and the violation score (Spearman rho up to 0.78, confirmed under repeated-measures analysis); under training-time noise, 13 of 16 pairs exhibit statistically significant reward degradation. An inversion phenomenon is documented in low-dimensional environments where the random forest absorbs the noise signal, causing the score to decrease as true violations grow, a failure mode analyzed in detail. A practical utility experiment demonstrates that the proposed score correctly identifies partial observability and guides architecture selection, fully recovering performance lost to non-Markovian observations. Source code to reproduce all results is provided at https://github.com/NAVEENMN/Markovianes.
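The two-stage procedure described above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the function name `violation_score`, the lag count `history_len`, and all hyperparameters are illustrative choices, and observations are assumed to be flat arrays.

```python
# Hypothetical sketch of the two-stage scoring procedure: a random forest
# strips nonlinear Markov-compliant dynamics, then ridge regression checks
# whether lagged observations explain the leftover residuals better than the
# current observation alone. All names and hyperparameters are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

def violation_score(obs, history_len=3, seed=0, ridge_alpha=1.0):
    """obs: (T, d) array of observations from one trajectory."""
    X_cur, y = obs[:-1], obs[1:]                    # predict o_{t+1} from o_t
    rf = RandomForestRegressor(n_estimators=30, random_state=seed)
    rf.fit(X_cur, y)
    resid = y - rf.predict(X_cur)                   # Markov-compliant part removed

    k, n = history_len, len(resid)                  # usable targets: t = k .. n-1
    X_now = X_cur[k:]                               # current observation o_t
    X_hist = np.hstack([obs[k - j:n - j] for j in range(1, k + 1)])  # o_{t-1..t-k}
    r = resid[k:]

    base = Ridge(alpha=ridge_alpha).fit(X_now, r)
    err_now = np.mean((r - base.predict(X_now)) ** 2)
    both = np.hstack([X_now, X_hist])
    full = Ridge(alpha=ridge_alpha).fit(both, r)
    err_full = np.mean((r - full.predict(both)) ** 2)

    # Relative error reduction from adding history; clipping bounds the score in [0, 1].
    return float(np.clip((err_now - err_full) / max(err_now, 1e-12), 0.0, 1.0))
```

Under perfectly Markovian observations the lagged features should add little, leaving the score near 0; history-dependent corruption such as correlated noise pushes it upward.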

Paper Structure

This paper contains 48 sections, 9 equations, 3 figures, and 5 tables.

Figures (3)

  • Figure 1: Phase 1: Violation score vs. noise intensity. Each panel shows one environment; lines represent different algorithms. Error bars are 95% CIs over 10 seeds. The score increases monotonically with $\alpha$ in HalfCheetah and CartPole. In Pendulum, Hopper (PPO/SAC), and Acrobot it decreases---the inversion phenomenon discussed in Section \ref{subsec:inversion}.
  • Figure 2: Phase 2: Reward vs. noise intensity. AR(1) noise during training degrades final performance across nearly all conditions. The worst collapses: HalfCheetah-SAC drops from 8920 to $-42$; Walker2d-SAC from 4244 to 318; CartPole-PPO from 500 to 53.
  • Figure 3: Combined: Violation score vs. reward ratio. Each point is one environment--algorithm--noise-level condition. In environments where the score correctly tracks violations (e.g., HalfCheetah, CartPole), higher scores correspond to lower reward. Inverted pairs cluster near score $\approx 0$ regardless of reward loss.
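For reference, AR(1) observation noise of the kind used in these experiments can be generated as below. This is a hedged sketch: the helper name `ar1_corrupt` is hypothetical, and since the captions do not pin down whether the intensity parameter $\alpha$ is the autoregressive coefficient or a magnitude scale, the sketch treats it as the autoregressive coefficient.

```python
# Hedged sketch of AR(1) observation corruption (hypothetical helper, not the
# paper's code). Successive noise terms obey n_t = a * n_{t-1} + eps_t, so the
# corrupted observation stream carries history and is no longer Markovian.
import numpy as np

def ar1_corrupt(obs, a=0.8, sigma=0.1, seed=0):
    """obs: (T, d) clean observations; returns obs plus correlated AR(1) noise."""
    rng = np.random.default_rng(seed)
    noise = np.zeros_like(obs, dtype=float)
    for t in range(1, len(obs)):
        noise[t] = a * noise[t - 1] + rng.normal(0.0, sigma, size=obs.shape[1])
    return obs + noise
```

Larger `a` yields more strongly correlated corruption, which is the regime where a history-aware predictor gains the most over the current observation alone.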