Persistent Cross-Attempt State Optimization for Repository-Level Code Generation

Ruwei Pan, Jiangshuai Wang, Qisheng Zhang, Yueheng Zhu, Linhao Wu, Zixiong Yang, Yakun Zhang, Lu Zhang, Hongyu Zhang

Abstract

Large language models (LLMs) have achieved substantial progress in repository-level code generation. However, solving the same repository-level task often requires multiple attempts, yet existing methods optimize each attempt in isolation and do not preserve or reuse task-specific state across attempts. In this paper, we propose LiveCoder, a novel framework for repository-level code generation based on cross-attempt knowledge optimization. LiveCoder maintains persistent task-specific state from prior attempts to guide subsequent generation. This state includes success knowledge, which captures reusable signals from previously strong repositories; failure knowledge, which records unsuccessful outcomes and their diagnostic signals; and a historical-best repository, which preserves the strongest result found so far and prevents regression. These components collectively transform repeated repository generation into a persistent, knowledge-driven optimization process. We evaluate LiveCoder using four frontier LLMs on two representative repository-level code generation benchmarks. Extensive experimental results demonstrate the effectiveness and efficiency of LiveCoder, improving the functional score by up to 22.94 percentage points, increasing repository reuse to 81.58%, and reducing cost by up to 53.63% on RAL-Bench while maintaining broadly stable non-functional quality.
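The persistent task-specific state described above can be pictured as a small data structure carried across attempts. The following is a minimal, hypothetical sketch (all names and fields are illustrative assumptions, not the paper's actual implementation) showing the three components and the historical-best preservation rule:

```python
from dataclasses import dataclass, field

@dataclass
class TaskState:
    """Hypothetical persistent cross-attempt state for one repository-level task."""
    success_knowledge: list = field(default_factory=list)  # reusable signals from strong repositories
    failure_knowledge: list = field(default_factory=list)  # diagnostic signals from failed attempts
    best_repo: object = None                               # historical-best repository snapshot
    best_score: float = float("-inf")                      # functional score of best_repo

    def record_attempt(self, repo, score, passed, signals):
        """Fold one attempt's outcome into the persistent state and return the best repo."""
        if passed:
            self.success_knowledge.append(signals)
        else:
            self.failure_knowledge.append(signals)
        # Historical-best preservation: never regress below the strongest result so far.
        if score > self.best_score:
            self.best_repo, self.best_score = repo, score
        return self.best_repo
```

Under this sketch, a weaker later attempt updates the failure knowledge but leaves the historical-best repository untouched, which is the regression-prevention behavior the abstract describes.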

Paper Structure

This paper contains 17 sections, 1 equation, 5 figures, 5 tables, and 1 algorithm.

Figures (5)

  • Figure 1: A motivating example of repeated repository-level code generation on the same problem. Existing methods may treat repeated attempts as isolated retries and regress below earlier stronger results, whereas LiveCoder preserves task-specific state across attempts and protects the historical-best repository.
  • Figure 2: Overview of LiveCoder. For each repository-level task, LiveCoder maintains a persistent task-specific state consisting of Success Knowledge, Failure Knowledge, and the historical-best repository. This state guides subsequent attempts, while historical-best preservation prevents regression below the strongest repository found so far.
  • Figure 3: Example Success Knowledge entry for a Stegano task.
  • Figure 4: Example Failure Knowledge entry for a Stegano task.
  • Figure 5: Distribution of residual functional failures after knowledge evolution across attempts.