A Comprehensive Evaluation of Parameter-Efficient Fine-Tuning on Code Smell Detection

Abstract

Automated code smell detection faces persistent challenges due to the subjectivity of heuristic rules and the limited performance of traditional Machine Learning (ML) and Deep Learning (DL) models. While Large Language Models (LLMs) offer a promising alternative, their adoption is impeded by high fine-tuning costs and a lack of "LM-ready" benchmarks. To bridge these gaps, we present a study with two synergistic contributions. First, we constructed a high-quality benchmark covering four code smells (Complex Conditional, Complex Method, Feature Envy, and Data Class), validated through a rigorous two-stage manual review. Second, leveraging this benchmark, we systematically evaluated four Parameter-Efficient Fine-Tuning (PEFT) methods across nine language models (LMs) of varying parameter sizes. We compared their performance against a comprehensive suite of baselines, including heuristic-based detectors, DL-based approaches, and state-of-the-art general-purpose LLMs under multiple In-Context Learning (ICL) settings. Our results demonstrate that PEFT methods achieve effectiveness comparable to or surpassing full fine-tuning while substantially reducing peak GPU memory usage for code smell detection. Furthermore, PEFT-tuned LMs consistently outperform all baselines, yielding Matthews Correlation Coefficient (MCC) improvements ranging from 0.33% to 13.69%, with particularly notable gains for specific smell categories. These findings highlight PEFT techniques as effective and scalable solutions for advancing code smell detection.
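
To make the PEFT setting concrete, the sketch below shows one illustrative way to attach LoRA adapters (a common PEFT method) to a small pre-trained code model for binary smell classification, using the Hugging Face transformers and peft libraries. The model name, hyperparameters, and label scheme are assumptions for illustration only, not the configuration used in this paper.

```python
# Illustrative sketch (not the paper's exact setup): LoRA-based PEFT for
# binary code smell classification with Hugging Face transformers + peft.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

MODEL_NAME = "microsoft/codebert-base"  # assumed encoder; the paper evaluates nine LMs

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
base_model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=2  # two labels: smelly vs. clean
)

# Attach LoRA adapters to the attention projections; rank/alpha values are illustrative.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights is trainable

# Classify a (hypothetical) code snippet; untrained adapters give arbitrary output.
snippet = "public int f(int a, int b) { if (a > 0) { if (b > 0) { return a + b; } } return 0; }"
inputs = tokenizer(snippet, return_tensors="pt", truncation=True)
logits = model(**inputs).logits
print(logits.argmax(dim=-1))  # 0 = clean, 1 = smelly (assumed label mapping)
```

Because only the low-rank adapter matrices (and the classification head) are updated, such a setup trains a small percentage of the parameters of full fine-tuning, which is the source of the peak GPU memory savings reported above.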