Executing as You Generate: Hiding Execution Latency in LLM Code Generation

Zhensu Sun, Zhihao Lin, Zhi Chen, Chengran Yang, Mingyi Zhou, Li Li, David Lo

Abstract

Current LLM-based coding agents follow a serial execution paradigm: the model first generates the complete code, then invokes an interpreter to execute it. This sequential workflow leaves the executor idle during generation and the generator idle during execution, resulting in unnecessary end-to-end latency. We observe that, unlike human developers, LLMs produce code tokens sequentially without revision, making it possible to execute code as it is being generated. We formalize this parallel execution paradigm, modeling it as a three-stage pipeline of generation, detection, and execution, and derive closed-form latency bounds that characterize its speedup potential and operating regimes. We then present Eager, a concrete implementation featuring AST-based chunking, dynamic batching with gated execution, and early error interruption. We evaluate Eager across four benchmarks, seven LLMs, and three execution environments. Results show that Eager reduces non-overlapped execution latency by up to 99.9% and end-to-end latency by up to 55%.
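
The abstract names three mechanisms: AST-based chunking, dynamic batching with gated execution, and early error interruption. The sketch below is not the authors' implementation; it is a minimal Python illustration of the generate/detect/execute pipeline for a line-granular token stream, covering AST-based chunk detection and early error interruption (dynamic batching is omitted for brevity). All names here, including `chunk_stream` and `run_pipeline`, are hypothetical.

```python
import ast
import queue
import threading

def chunk_stream(lines):
    """Detector: group streamed source lines into complete top-level
    statements. A buffer is flushed only when it parses as valid Python
    AND the next line starts a new top-level statement (column 0), so a
    suite body that arrives later is never cut off mid-block."""
    pending = []
    for line in lines:
        starts_top_level = bool(line.strip()) and not line[0].isspace()
        if pending and starts_top_level:
            src = "".join(pending)
            try:
                ast.parse(src)      # complete statement(s)?
                yield src
                pending = []
            except SyntaxError:
                pass                # still incomplete; keep buffering
        pending.append(line)
    if pending:
        yield "".join(pending)      # flush the tail at end of stream

def run_pipeline(token_lines):
    """Overlap execution with generation: a worker thread executes each
    detected chunk in a shared namespace while later lines are still
    arriving from the model."""
    chunks = queue.Queue()
    failed = threading.Event()      # early error interruption
    env = {}                        # globals shared across chunks

    def executor():
        while True:
            chunk = chunks.get()
            if chunk is None:
                return
            try:
                exec(compile(chunk, "<chunk>", "exec"), env)
            except Exception as exc:
                print(f"chunk failed early: {exc!r}")
                failed.set()
                return

    worker = threading.Thread(target=executor)
    worker.start()
    for chunk in chunk_stream(token_lines):
        if failed.is_set():         # stop generating once a chunk fails
            break
        chunks.put(chunk)
    chunks.put(None)
    worker.join()
    return env

if __name__ == "__main__":
    fake_stream = iter([            # stand-in for an LLM token stream
        "import math\n",
        "total = 0\n",
        "for i in range(3):\n",
        "    total += math.sqrt(i)\n",
        "print(round(total, 3))\n",  # prints 2.414
    ])
    run_pipeline(fake_stream)
```

On this toy stream the detector emits four chunks (`import math`, `total = 0`, the complete `for` loop, and the `print` call), mirroring the four-chunk example in Figure 1; in a real deployment the lines would trickle in at generation speed, so the worker's execution time is hidden behind them.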

Paper Structure

This paper contains 40 sections, 15 equations, 3 figures, 3 tables.

Figures (3)

  • Figure 1: An illustrative example comparing Serial Execution and Parallel Execution. For a code snippet with four chunks, Parallel Execution overlaps the first three chunks with the generation process, saving the corresponding waiting time.
  • Figure 2: Architecture of Eager.
  • Figure 3: A real-world example of Eager on a DABench task (id: dabench_19) generated by DeepSeek-V3.2. The baseline (serial execution) completes in 8909 ms (generation $T_{\text{gen}}$ = 8560 ms + execution $T_{\text{exec}}$ = 349 ms), while Eager overlaps most chunk executions with generation, finishing in 8561 ms and saving 348 ms. The green bars indicate individual chunk executions running concurrently with token generation; a back-of-envelope breakdown of these timings follows the list.
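
The Figure 3 timings admit a quick sanity check. The paper's closed-form latency bounds are derived in the body; the display below is only a back-of-envelope version of the standard pipeline argument, where $e_{\text{tail}}$ (a symbol introduced here, not taken from the paper) denotes the execution time of the final chunk that cannot be hidden behind generation:

$$
\begin{aligned}
T_{\text{serial}} &= T_{\text{gen}} + T_{\text{exec}} = 8560 + 349 = 8909~\text{ms},\\
T_{\text{parallel}} &\ge \max(T_{\text{gen}},\, T_{\text{exec}}) = 8560~\text{ms},\\
T_{\text{parallel}} &\approx T_{\text{gen}} + e_{\text{tail}} = 8560 + 1 = 8561~\text{ms}.
\end{aligned}
$$

Eager thus hides $349 - 1 = 348$ ms of the 349 ms of execution behind generation, matching the reported saving; only the tail of the final chunk remains on the critical path.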