BACE: LLM-based Code Generation through Bayesian Anchored Co-Evolution of Code and Test Populations

Kaushitha Silva, Srinath Perera

Abstract

Large Language Models (LLMs) have demonstrated impressive capabilities in code generation. While an interactive feedback loop can improve performance, writing effective tests is a non-trivial task. Early multi-agent frameworks, such as AgentCoder, automated this process but relied on generated tests as absolute ground truth. This approach is fragile: incorrect code frequently passes faulty or trivial tests, while valid solutions are often degraded to satisfy incorrect assertions. Addressing this limitation, newer methods have largely abandoned test generation in favor of planning and reasoning based on examples. We argue, however, that generated tests remain a valuable signal if we model them as noisy sensors governed by Bayesian updates. To this end, we introduce BACE (Bayesian Anchored Co-Evolution), a framework that reformulates code synthesis as a Bayesian co-evolutionary process in which code and test populations are jointly evolved, guided by belief distributions that are reciprocally updated from noisy interaction evidence. By anchoring this search on minimal public examples, BACE prevents the co-evolutionary drift typical of self-validating loops. Extensive evaluations on LiveCodeBench v6 (post-March 2025) show that BACE achieves superior performance across both proprietary models and open-weight small language models.
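To make the "noisy sensor" view concrete, the sketch below is our own minimal illustration, not the paper's implementation: a single test is treated as a sensor with assumed reliability parameters `tpr` (probability a correct candidate passes the test) and `fpr` (probability an incorrect candidate passes it), and each observed pass/fail outcome drives a standard Bayesian update of the belief that a code candidate is correct.

```python
def update_belief(prior: float, passed: bool,
                  tpr: float = 0.9, fpr: float = 0.3) -> float:
    """Posterior P(candidate correct | test outcome) for one noisy test.

    tpr/fpr are hypothetical sensor reliabilities, not values from the paper.
    """
    # Likelihood of the observed outcome under each hypothesis.
    if passed:
        like_correct, like_incorrect = tpr, fpr
    else:
        like_correct, like_incorrect = 1.0 - tpr, 1.0 - fpr
    evidence = like_correct * prior + like_incorrect * (1.0 - prior)
    return like_correct * prior / evidence

belief = 0.5                         # uninformative prior over one candidate
for passed in (True, True, False):   # outcomes against three generated tests
    belief = update_belief(belief, passed)
print(f"posterior correctness belief: {belief:.3f}")  # 0.562
```

Because the sensor is imperfect, a single failing test only dents the belief rather than vetoing the candidate outright, which is the qualitative behavior the abstract contrasts with treating generated tests as absolute ground truth.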

Figures (3)

  • Figure 1: Systemic failure modes in deterministic test-generation architectures. (a) False positives occur when incorrect code satisfies faulty tests. (b) False negatives cause valid logic to be rejected and subsequently degraded to satisfy incorrect assertions.
  • Figure 2: Functional Equivalence (Blue): Candidates $c_3$ and $c_4$ produce identical pass/fail vectors across all tests. Functional Redundancy (Orange): Tests $t_4$ and $t_5$ induce identical pass/fail vectors across all code candidates (see the sketch following this list).
  • Figure 3: Ancestral Lineage of a Solution. An illustrative visualization of the BACE co-evolutionary process, tracing the genesis of a candidate solution ($c_4$).
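The structure described in Figure 2 can be read directly off a pass/fail matrix whose rows are code candidates and whose columns are tests. The sketch below uses an assumed toy matrix (the values are illustrative, not taken from the paper): candidates sharing a row are functionally equivalent under the current test suite, and tests sharing a column are redundant, adding no discriminating signal.

```python
from collections import defaultdict

# Hypothetical 4-candidate x 5-test outcome matrix (1 = pass, 0 = fail).
outcomes = [
    [1, 0, 0, 1, 1],  # c1
    [0, 1, 0, 1, 1],  # c2
    [0, 0, 1, 0, 0],  # c3
    [0, 0, 1, 0, 0],  # c4 (same row as c3 -> functionally equivalent)
]

def group_identical(vectors):
    """Return index groups of vectors that are exactly identical."""
    groups = defaultdict(list)
    for idx, vec in enumerate(vectors):
        groups[tuple(vec)].append(idx)
    return [g for g in groups.values() if len(g) > 1]

columns = list(zip(*outcomes))  # transpose: one pass/fail vector per test
print("equivalent candidates:", group_identical(outcomes))  # [[2, 3]] -> c3, c4
print("redundant tests:", group_identical(columns))         # [[3, 4]] -> t4, t5
```

Collapsing such duplicate rows and columns shrinks both populations without losing information, which is one natural use of the equivalence and redundancy classes the figure depicts.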