D-SPEAR: Dual-Stream Prioritized Experience Adaptive Replay for Stable Reinforcement Learning in Robotic Manipulation

Yu Zhang, Karl Mason

Abstract

Robotic manipulation remains challenging for reinforcement learning due to contact-rich dynamics, long horizons, and training instability. Although off-policy actor-critic algorithms such as SAC and TD3 perform well in simulation, they often suffer from policy oscillations and performance collapse in realistic settings, partly due to experience replay strategies that ignore the differing data requirements of the actor and the critic. We propose D-SPEAR: Dual-Stream Prioritized Experience Adaptive Replay, a replay framework that decouples actor and critic sampling while maintaining a shared replay buffer. The critic leverages prioritized replay for efficient value learning, whereas the actor is updated using low-error transitions to stabilize policy optimization. An adaptive anchor mechanism balances uniform and prioritized sampling based on the coefficient of variation of TD errors, and a Huber-based critic objective further improves robustness under heterogeneous reward scales. We evaluate D-SPEAR on challenging robotic manipulation tasks from the robosuite benchmark, including Block-Lifting and Door-Opening. Results demonstrate that D-SPEAR consistently outperforms strong off-policy baselines, including SAC, TD3, and DDPG, in both final performance and training stability, with ablation studies confirming the complementary roles of the actor-side and critic-side replay streams.
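
To make the dual-stream scheme concrete, the following is a minimal sampling sketch in Python/NumPy. It illustrates the idea described above and is not the paper's reference implementation: the function name, the prioritization exponent `alpha`, the `eps` floor, and the way the uniform anchor set is mixed into each mini-batch are assumptions; the exact formulation is given in the method section.

```python
import numpy as np

rng = np.random.default_rng(0)

def dual_stream_indices(td_errors, batch_size, lam, alpha=0.6, eps=1e-6):
    """Sample critic and actor mini-batch indices from one shared buffer.

    td_errors : array of most recent |TD error| per stored transition.
    lam       : anchor ratio in [0, 1], the fraction of each batch drawn
                uniformly (illustrative mixing rule).
    alpha     : prioritization exponent (illustrative default).
    """
    n = len(td_errors)
    n_anchor = int(round(lam * batch_size))
    n_prio = batch_size - n_anchor

    # Critic stream: PER-style priorities that favour high-TD transitions.
    p_hi = (td_errors + eps) ** alpha
    p_hi /= p_hi.sum()

    # Actor stream: inverse priorities that favour low-TD transitions,
    # i.e. experiences the critic already fits well.
    p_lo = (td_errors + eps) ** (-alpha)
    p_lo /= p_lo.sum()

    critic_idx = np.concatenate([rng.choice(n, n_anchor),          # uniform anchor set
                                 rng.choice(n, n_prio, p=p_hi)])   # high-TD stream
    actor_idx = np.concatenate([rng.choice(n, n_anchor),
                                rng.choice(n, n_prio, p=p_lo)])    # low-TD stream
    return critic_idx, actor_idx

# Usage with stand-in TD errors:
td = np.abs(rng.normal(size=10_000))
critic_idx, actor_idx = dual_stream_indices(td, batch_size=256, lam=0.3)
```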

Figures (4)

  • Figure 1: Overview of the proposed Dual-Stream Prioritized Experience Adaptive Replay (D-SPEAR). The replay buffer is decomposed into an anchor set sampled uniformly and two prioritized streams: a high-TD stream for critic updates and a low-TD inverse-priority stream for actor updates. An adaptive controller adjusts the anchor ratio $\lambda$ based on the coefficient of variation (CV) of TD errors; a hedged sketch of this controller follows the list.
  • Figure 2: Robotic manipulation environments used in our experiments. (a) Lift: object grasping and lifting. (b) Door: contact-rich articulated door opening.
  • Figure 3: Performance comparison on robosuite manipulation tasks. Episode return as a function of environment steps on Lift and Door. Solid lines denote the mean performance over $5$ random seeds, and shaded regions indicate one standard deviation.
  • Figure 4: Ablation study on robosuite manipulation tasks. We compare the full D-SPEAR framework with variants that remove key components: w/o dual-stream, w/o low-actor, and w/o high-critic. Results are averaged over $5$ random seeds, with shaded regions indicating one standard deviation.
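
The adaptive anchor controller of Figure 1 and the Huber-based critic penalty mentioned in the abstract can be sketched as follows. Both functions are assumptions rather than the authors' exact rules: the direction of the $\lambda$ update (raise the uniform anchor fraction when the CV of TD errors is high, i.e. when prioritized sampling would over-weight outliers), the reference value `cv_ref`, the step size, and the Huber threshold `kappa` are all illustrative choices consistent with the description above.

```python
import numpy as np

def update_anchor_ratio(td_errors, lam, lam_min=0.1, lam_max=0.9,
                        cv_ref=1.0, step=0.05, eps=1e-8):
    """Adapt the anchor ratio lambda from the coefficient of variation
    (CV = std / mean) of the current |TD errors|; constants are illustrative."""
    cv = td_errors.std() / (td_errors.mean() + eps)
    lam += step if cv > cv_ref else -step   # assumed direction of the update
    return float(np.clip(lam, lam_min, lam_max))

def huber(delta, kappa=1.0):
    """Huber penalty on TD errors: quadratic near zero, linear in the tails,
    damping the influence of large targets under heterogeneous reward scales."""
    a = np.abs(delta)
    return np.where(a <= kappa, 0.5 * delta ** 2, kappa * (a - 0.5 * kappa))
```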