Hierarchical Context Pruning: Optimizing Real-World Code Completion with Repository-Level Pretrained Code LLMs

Lei Zhang, Yunshui Li, Jiaming Li, Xiaobo Xia, Jiaxi Yang, Run Luo, Minzheng Wang, Longze Chen, Junhao Liu, Min Yang

TL;DR

This work addresses the challenge of real-world code completion with repository-scale pre-trained LLMs, where raw repository concatenation exceeds model context windows. It introduces Hierarchical Context Pruning (HCP), which models the repository at the function level and preserves file dependencies while pruning irrelevant content, dramatically reducing prompt length from over 50,000 tokens to roughly 8,000. Across six Repo-Code LLMs, HCP improves completion accuracy by leveraging topological dependencies and selective cross-file information, outperforming naive concatenation and RAG baselines. The approach combines fine-grained repository modeling via Tree-Sitter, embedding-based function-level sampling, and top-k/top-p pruning to yield efficient, high-signal prompts with practical implications for repo-aware code completion in production settings.
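
The fine-grained repository modeling mentioned above relies on Tree-Sitter. Below is a minimal sketch of what function-level parsing could look like in Python; it illustrates the general technique rather than the authors' implementation, and the helper names are hypothetical.

```python
# Minimal sketch: function-level repository modeling with Tree-Sitter.
# Illustrative only -- not the authors' implementation. Assumes the
# `tree-sitter` and `tree-sitter-python` packages are installed; the
# binding API varies slightly across py-tree-sitter versions.
import tree_sitter_python as tspython
from tree_sitter import Language, Parser

PY_LANGUAGE = Language(tspython.language())
parser = Parser(PY_LANGUAGE)


def extract_functions(source: bytes) -> list[dict]:
    """Return name, signature, and body span for every function in a file."""
    tree = parser.parse(source)
    functions, stack = [], [tree.root_node]
    while stack:
        node = stack.pop()
        if node.type == "function_definition":
            name = node.child_by_field_name("name")
            body = node.child_by_field_name("body")
            functions.append({
                "name": name.text.decode(),
                # Signature = everything before the body (def line, params).
                "signature": source[node.start_byte:body.start_byte].decode().rstrip(),
                # Body span is kept so the body can be pruned from the prompt later.
                "body_span": (body.start_byte, body.end_byte),
            })
        stack.extend(node.children)
    return functions


print(extract_functions(b"def add(a: int, b: int) -> int:\n    return a + b\n"))
```

Keeping full bodies only for relevant functions and reducing the rest to signatures is what allows a 50,000+ token repository dump to shrink to a roughly 8,000-token prompt while preserving the file-level structure these models were pre-trained on.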

Abstract

Some recently developed code large language models (Code LLMs) have been pre-trained on repository-level code data (Repo-Code LLMs), enabling these models to recognize repository structures and utilize cross-file information for code completion. However, in real-world development scenarios, simply concatenating the entire code repository often exceeds the context window limits of these Repo-Code LLMs, leading to significant performance degradation. In this study, we conducted extensive preliminary experiments and analyses on six Repo-Code LLMs. The results indicate that maintaining the topological dependencies of files and increasing the amount of code file content in the completion prompts can improve completion accuracy, while pruning the specific implementations of functions in all dependent files does not significantly reduce completion accuracy. Based on these findings, we propose a strategy named Hierarchical Context Pruning (HCP) to construct completion prompts with high informational code content. HCP models the code repository at the function level, maintaining the topological dependencies between code files while removing a large amount of irrelevant code content, significantly reducing the input length for repository-level code completion. We applied the HCP strategy in experiments with six Repo-Code LLMs, and the results demonstrate that our proposed method can significantly enhance completion accuracy while substantially reducing input length. Our code and data are available at https://github.com/Hambaobao/HCP-Coder.
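
To make the top-k/top-p pruning step concrete, here is a minimal sketch under stated assumptions: candidate functions are scored by embedding similarity to the completion context, the top-k most similar keep their full bodies, and further functions are retained until a cumulative top-p share of normalized relevance mass is reached; everything else is pruned to its signature. This illustrates the general idea rather than the paper's exact algorithm, and the embedding model is left abstract.

```python
# Minimal sketch of top-k / top-p selection over function relevance scores,
# in the spirit of HCP's pruning (not the authors' exact algorithm).
# `query_emb` / `func_embs` are assumed to come from any code embedding model.
import numpy as np


def select_full_functions(query_emb: np.ndarray,
                          func_embs: np.ndarray,
                          top_k: int = 5,
                          top_p: float = 0.9) -> list[int]:
    """Indices of functions kept with full bodies; the rest keep only
    their signatures in the completion prompt."""
    # Cosine similarity between the completion context and each function.
    sims = func_embs @ query_emb
    sims = sims / (np.linalg.norm(func_embs, axis=1)
                   * np.linalg.norm(query_emb) + 1e-8)

    order = np.argsort(-sims)              # most relevant first
    probs = np.exp(sims - sims.max())
    probs /= probs.sum()                   # softmax-normalized relevance mass

    keep, cum = [], 0.0
    for rank, idx in enumerate(order):
        if rank < top_k or cum < top_p:    # always keep top-k, then fill to top-p
            keep.append(int(idx))
            cum += float(probs[idx])
        else:
            break
    return keep


# Toy usage: 3-dim embeddings for 6 candidate functions.
rng = np.random.default_rng(0)
print(select_full_functions(rng.normal(size=3), rng.normal(size=(6, 3))))
```

The defaults mirror the settings reported in the paper's figures, which compare different top-p values at top-k = 5; whether the top-k mass counts toward the top-p budget is a design choice of this sketch.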

Paper Structure

This paper contains 59 sections, 7 equations, 13 figures, and 12 tables.

Figures (13)

  • Figure 1: The error class distribution of the completion results of the DeepseekCoder, Starcoder2 and CodeGemma models on the CrossCodeEval: Python benchmark.
  • Figure 2: The distribution of tokenized prompt lengths in the CrossCodeEval benchmark. The x-axis represents the dependency level, and the y-axis represents the number of tokens.
  • Figure 3: The framework of hierarchical context pruning for improving the performance of code large language models in real-world code completion tasks.
  • Figure 4: Left: Comparison of completion results using random-all and hierarchical context pruning across six models. Middle: Comparison of throughput using random-all and hierarchical context pruning across six models. Right: Comparison of prompt length using random-all and hierarchical context pruning at different top-p values (top-k = 5).
  • Figure 5: An example of redundant content generation error.
  • ...and 8 more figures