COvolve: Adversarial Co-Evolution of Large-Language-Model-Generated Policies and Environments via Two-Player Zero-Sum Game

Alkis Sygkounas, Rishi Hazra, Andreas Persson, Pedro Zuidberg Dos Martires, Amy Loutfi

Abstract

A central challenge in building continually improving agents is that training environments are typically static or manually constructed. This restricts continual learning and generalization beyond the training distribution. We address this with COvolve, a co-evolutionary framework that leverages large language models (LLMs) to generate both environments and agent policies, expressed as executable Python code. We model the interaction between environment and policy designers as a two-player zero-sum game, ensuring adversarial co-evolution in which environments expose policy weaknesses and policies adapt in response. This process induces an automated curriculum in which environments and policies co-evolve toward increasing complexity. To guarantee robustness and prevent forgetting as the curriculum progresses, we compute the mixed-strategy Nash equilibrium (MSNE) of the zero-sum game, thereby yielding a meta-policy. This MSNE meta-policy ensures that the agent does not forget to solve previously seen environments while learning to solve previously unseen ones. Experiments in urban driving, symbolic maze-solving, and geometric navigation show that COvolve produces progressively more complex environments. Our results demonstrate the potential of LLM-driven co-evolution to achieve open-ended learning without predefined task distributions or manual intervention.
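For concreteness, the MSNE of a finite two-player zero-sum game over a set of generated policies and environments can be computed with a standard linear program. The sketch below is illustrative rather than the paper's implementation: the payoff matrix, the function name msne_row_strategy, and the choice of scipy's linprog solver are all assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def msne_row_strategy(payoff):
    """Mixed strategy for the row (policy) player of a zero-sum game.

    payoff[i, j] is the row player's payoff when policy i is evaluated
    on environment j. Solves the classic maximin LP:
        max v  subject to  payoff.T @ x >= v,  sum(x) = 1,  x >= 0.
    """
    m, n = payoff.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                    # linprog minimizes, so minimize -v
    A_ub = np.hstack([-payoff.T, np.ones((n, 1))])  # v - payoff[:, j] @ x <= 0 for all j
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])  # mixing weights sum to 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]       # x >= 0, game value v is free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]                     # meta-policy weights, game value

# Hypothetical example: 3 generated policies scored on 4 evolved environments
payoff = np.array([[1.0, 0.2, 0.0, 0.3],
                   [0.4, 0.9, 0.1, 0.2],
                   [0.3, 0.5, 0.8, 0.6]])
mix, value = msne_row_strategy(payoff)
print(mix, value)
```

The resulting mixture over policies is the kind of meta-policy the abstract describes: by weighting policies so that the worst case over all generated environments is maximized, it hedges against forgetting earlier environments as new ones are added.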

Figures (15)

  • Figure 1: A conceptual overview of the proposed COvolve framework, comprising an Environment Designer and a Policy Designer that co-evolve by playing a two-player zero-sum game. The Environment Designer generates increasingly challenging environments (as code), while the Policy Designer creates policies (as code) to solve them. A mixed-strategy Nash equilibrium enables robust, open-ended learning through continual adaptation.
  • Figure 2: Example of co-evolution in COvolve. Left: successive environment implementations, progressing from ad hoc generation to a structured, parameterized design with explicit solvability checks and controllable chokepoints. Right: successive policy implementations, progressing from basic navigation to improved handling of keys and doors, together with refinements to the A*-based planner for more reliable action selection (a minimal A* sketch appears after this figure list). Highlighted code blocks indicate the changes introduced.
  • Figure 3: A selection of evolved MiniGrid environments produced by COvolve. Complexity increases from empty grids to larger mazes with dense walls and locked doors requiring corresponding keys. The agent must reach the green goal tile, often by planning multi-step sequences of key retrieval and door unlocking.
  • Figure 4: A selection of evolved PyGame environments produced by COvolve. Tasks progress from open arenas to cluttered maps with dense obstacles and narrow corridors. The agent must reach the rectangular goal zone while navigating collision-free paths through increasingly constrained layouts.
  • Figure 5: Selected CARLA environments produced by COvolve. Tasks progress from urban driving on empty roads to crowded streets with increasingly aggressive actor behaviors. The agent must drive along the street while obeying traffic rules (such as stopping at red lights) and adapting to the increasingly unpredictable behavior of fellow drivers and pedestrians.
  • ...and 10 more figures
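As a companion to the Figure 2 caption above, here is a minimal sketch of the kind of A*-based grid planner those policies refine. The grid encoding (0 = free, 1 = wall), 4-connectivity, Manhattan heuristic, and function name are illustrative assumptions; the paper's generated policies additionally handle keys and locked doors, which this sketch omits.

```python
import heapq

def astar(grid, start, goal):
    """Shortest path on a 4-connected grid; returns a list of cells or None."""
    def h(p):  # Manhattan distance: admissible on a grid, so A* stays optimal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    open_heap = [(h(start), 0, start)]            # heap entries are (f, g, cell)
    came_from, best_g = {}, {start: 0}
    while open_heap:
        _, g, cur = heapq.heappop(open_heap)
        if cur == goal:                           # walk parents back to start
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        if g > best_g.get(cur, float("inf")):
            continue                              # stale heap entry, skip it
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < best_g.get(nxt, float("inf"))):
                best_g[nxt] = g + 1
                came_from[nxt] = cur
                heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, nxt))
    return None
```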