Evolution Strategies for Deep RL Pretraining

Adrian Martínez, Ananya Gupta, Hanka Goralija, Mario Rico, Saúl Fenollosa, Tamar Alphaidze

Abstract

Although Deep Reinforcement Learning (DRL) has proven highly effective for complex decision-making problems, it demands significant computational resources and careful hyperparameter tuning to develop successful policies. Evolution Strategies (ES) offer a simpler, derivative-free alternative that is less computationally costly and easier to deploy. However, ES generally do not match the performance levels achieved by DRL, which calls into question their suitability for more demanding scenarios. This study compares the performance of ES and DRL across tasks of varying difficulty, including Flappy Bird, Breakout, and MuJoCo environments, and examines whether ES can serve as an initial training phase that enhances DRL algorithms. The results indicate that ES do not consistently train faster than DRL. When used as a pretraining step, ES provide benefits only in less complex environments (Flappy Bird) and yield minimal or no improvement in training efficiency or stability across different parameter settings when applied to more sophisticated tasks (Breakout and MuJoCo Walker).
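
For readers unfamiliar with the approach, the sketch below illustrates the kind of derivative-free update the abstract refers to: a minimal, OpenAI-style Evolution Strategies step (Salimans et al., 2017) in Python with NumPy. It is a sketch under stated assumptions, not the authors' implementation; evaluate_policy and the hyperparameters npop, sigma, and alpha are illustrative names.

    import numpy as np

    def es_step(theta, evaluate_policy, npop=50, sigma=0.1, alpha=0.01, rng=None):
        # One derivative-free ES update on the parameter vector theta (1-D array).
        # evaluate_policy(params) is assumed to return the episodic return of the
        # policy parameterized by params; it stands in for an environment rollout.
        rng = np.random.default_rng() if rng is None else rng
        noise = rng.standard_normal((npop, theta.size))
        returns = np.array([evaluate_policy(theta + sigma * eps) for eps in noise])
        # Standardize returns so the step size is invariant to the reward scale.
        advantages = (returns - returns.mean()) / (returns.std() + 1e-8)
        # Monte Carlo estimate of the gradient of expected return w.r.t. theta.
        grad = noise.T @ advantages / (npop * sigma)
        return theta + alpha * grad

Pretraining in the sense studied here would amount to running such updates for an initial budget of rollouts and then handing the resulting theta to a DRL learner, for example as the initial weights of a DQN or PPO policy network.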


Figures

  • Figure 1: Smoothed learning curves for ES, DQN, and ES-pretrained DQN versus cumulative training time in the Flappy Bird environment.
  • Figure 2: Smoothed learning curves for ES and DQN versus cumulative training time in the Breakout environment.
  • Figure 3: Performance comparison of ES, PPO, and ES pretraining across various MuJoCo environments.