Hierarchical Context Pruning: Optimizing Real-World Code Completion with Repository-Level Pretrained Code LLMs
Lei Zhang, Yunshui Li, Jiaming Li, Xiaobo Xia, Jiaxi Yang, Run Luo, Minzheng Wang, Longze Chen, Junhao Liu, Min Yang
TL;DR
This work addresses the challenge of real-world code completion with repository-level pretrained Code LLMs (Repo-Code LLMs), where naively concatenating the raw repository exceeds model context windows. It introduces Hierarchical Context Pruning (HCP), which models the repository at the function level and preserves file dependencies while pruning irrelevant content, dramatically reducing prompt length from over 50,000 tokens to roughly 8,000. Across six Repo-Code LLMs, HCP improves completion accuracy by leveraging topological dependencies and selective cross-file information, outperforming naive concatenation and RAG baselines. The approach combines fine-grained repository modeling via Tree-Sitter, embedding-based function-level sampling, and top-k/top-p pruning to yield efficient, high-signal prompts with practical implications for repo-aware code completion in production settings.
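The top-k/top-p selection mentioned above can be illustrated with a small sketch. This is not the paper's implementation: the function name, the example relevance scores, and the parameter values are all hypothetical; it only shows the general idea of first capping the candidate set at k functions and then keeping the smallest prefix whose cumulative relevance mass reaches p.

```python
def topk_topp_select(scores: dict[str, float], k: int = 3, p: float = 0.9) -> list[str]:
    """Illustrative top-k/top-p selection over function relevance scores.

    Rank candidates by score, keep at most k, then truncate to the
    shortest prefix whose normalized cumulative score reaches p.
    """
    # Top-k: keep only the k highest-scoring candidate functions.
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]
    total = sum(s for _, s in ranked)
    kept, mass = [], 0.0
    # Top-p: accumulate normalized scores until the mass threshold is hit.
    for name, s in ranked:
        kept.append(name)
        mass += s / total
        if mass >= p:
            break
    return kept

# Hypothetical embedding-similarity scores for four repository functions.
scores = {"parse": 0.5, "index": 0.3, "log": 0.15, "misc": 0.05}
selected = topk_topp_select(scores, k=3, p=0.8)  # → ["parse", "index"]
```

Here "log" survives the top-k cut but is dropped by top-p, since "parse" and "index" already cover 80% of the retained relevance mass.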
Abstract
Some recently developed code large language models (Code LLMs) have been pre-trained on repository-level code data (Repo-Code LLMs), enabling these models to recognize repository structures and utilize cross-file information for code completion. However, in real-world development scenarios, simply concatenating the entire code repository often exceeds the context window limits of these Repo-Code LLMs, leading to significant performance degradation. In this study, we conducted extensive preliminary experiments and analyses on six Repo-Code LLMs. The results indicate that maintaining the topological dependencies of files and increasing the amount of code file content in the completion prompts can improve completion accuracy, while pruning the specific implementations of functions in all dependent files does not significantly reduce completion accuracy. Based on these findings, we propose a strategy named Hierarchical Context Pruning (HCP) to construct completion prompts with high informational code content. HCP models the code repository at the function level, maintaining the topological dependencies between code files while removing a large amount of irrelevant code content, thereby significantly reducing the input length for repository-level code completion. We applied the HCP strategy in experiments with six Repo-Code LLMs, and the results demonstrate that our proposed method can significantly enhance completion accuracy while substantially reducing the input length. Our code and data are available at https://github.com/Hambaobao/HCP-Coder.
