PAFT: Preservation Aware Fine-Tuning for Minimal-Edit Program Repair

Boyang Yang, Zijian Cai, Shunfu Jin, Haoye Tian

Abstract

Large language models (LLMs) are effective for automated program repair, but plausible patches that pass the full test suite often rewrite more code than necessary, increasing review and maintenance costs. This over-editing is common because most bugs are localized, while standard supervised fine-tuning provides no explicit signal about which tokens should be preserved and which should be changed. We propose PAFT, a preservation-aware fine-tuning method for minimal-edit program repair. PAFT derives token-level preservation signals by aligning buggy and fixed code, combines them with full-sequence masking, and applies an edit-difficulty curriculum. Across Defects4J and HumanEval-Java, PAFT improves pass@1 by up to 65.6% over standard supervised fine-tuning (StdFT) while reducing average edit distance (AED) by up to 32.6%. On Defects4J with DeepSeek-Coder-6.7B, PAFT also outperforms AdaPatcher, a strong preference-based repair baseline, improving pass@1 from 5.9% to 10.1% while reducing median AED from 61.0 to 42.0. Overall, PAFT preserves stable context and concentrates edits on faulty regions, yielding smaller, more localized, plausible patches without inference-time search, reranking, or post-processing.
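
To make the preservation signal concrete, the following minimal sketch derives token-level weights by aligning buggy and fixed code with Python's difflib. The function name preservation_weights and the keep_weight/edit_weight values are illustrative assumptions, not the paper's exact alignment or weighting scheme.

# Minimal sketch of alignment-guided preservation signals (illustrative;
# PAFT's actual tokenizer, masking, and weight values may differ).
import difflib

def preservation_weights(buggy_tokens, fixed_tokens,
                         keep_weight=0.5, edit_weight=2.0):
    """Label each token of the fixed program as preserved or edited by
    aligning it against the buggy program, then map labels to loss weights.
    keep_weight and edit_weight are hypothetical values for illustration."""
    matcher = difflib.SequenceMatcher(a=buggy_tokens, b=fixed_tokens)
    weights = []
    for op, _i1, _i2, j1, j2 in matcher.get_opcodes():
        # Tokens copied verbatim from the buggy program are "preserved";
        # inserted or replaced tokens constitute the actual edit.
        w = keep_weight if op == "equal" else edit_weight
        weights.extend([w] * (j2 - j1))
    return weights  # one weight per token of the fixed program

buggy = "if (x > 0) return x ;".split()
fixed = "if (x >= 0) return x ;".split()
print(list(zip(fixed, preservation_weights(buggy, fixed))))

In this example only the ">" token is replaced, so a weighted training loss would up-weight that single edited position while down-weighting the surrounding context that should be copied through unchanged.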

Paper Structure

This paper contains 27 sections, 10 equations, 3 figures, and 10 tables.

Figures (3)

  • Figure 1: Motivating example of PAFT.
  • Figure 2: Overview of PAFT. We derive an alignment-guided preservation signal from $(p,b,f)$ and fine-tune a quantized code LLM with QLoRA using a preservation-aware token weighting and an edit-difficulty curriculum.
  • Figure 3: Distribution of AED values over plausible patches on Defects4J for DS-Coder-6.7B and its variants. The first row corresponds to the base DS-Coder-6.7B model. PAFT shifts the distribution toward smaller edits and achieves the lowest mean and median AED among all compared methods (a sketch of the AED computation follows this list).
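
As an illustration of the AED metric summarized in Figure 3, the sketch below computes the mean and median edit distance over a set of plausible patches. It assumes AED is the Levenshtein distance between the buggy program and each generated patch; the paper's exact granularity (character versus token level) and any normalization are assumptions here.

# Minimal sketch of the AED statistic reported in Figure 3 (illustrative;
# assumes Levenshtein distance between the buggy program and each patch).
from statistics import mean, median

def levenshtein(a: str, b: str) -> int:
    """Classic two-row dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def aed(buggy: str, plausible_patches: list[str]) -> tuple[float, float]:
    """Mean and median edit distance over the plausible patches."""
    dists = [levenshtein(buggy, p) for p in plausible_patches]
    return mean(dists), median(dists)

Under this reading, a minimal-edit repair method shifts the whole distribution of per-patch distances leftward, which is what Figure 3 reports for PAFT.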