Search-Induced Issues in Web-Augmented LLM Code Generation: Detecting and Repairing Error-Inducing Pages

Guoqing Wang, Zeyu Sun, Xiaofei Xie, Yizhou Chen, Yanchao Tan, Yifan Zhao, Dan Hao

Abstract

Web-augmented large language models (LLMs) offer promising capabilities for automatic code generation. However, integrating live web search exposes models to unreliable or malicious content, leading to Search-Induced Issues (SII), a novel failure mode in which external pages mislead LLMs into producing incorrect code. This paper presents a comprehensive empirical study of the prevalence and impact of SII across three commercial search APIs and six advanced LLMs. Our analysis reveals that all evaluated web-augmented LLMs are vulnerable to SII, with root causes arising from either misaligned specifications or flawed code implementations in the searched Error-Inducing Pages (EIPs). To address this challenge, we propose Sherlock, an automated framework that enables LLM service providers to proactively safeguard web-augmented generation systems at scale. Sherlock operates as a continuous pipeline that first detects potential SII instances, then debugs them to identify the responsible EIPs and pinpoint their root causes, and finally repairs them by either annotating misaligned content or replacing erroneous code snippets with evaluated solutions from trusted sources. Experiments show that Sherlock identifies EIPs with an F1 score of up to 95% and repairs 71% to 100% of affected generations across the evaluated models, with modest computational overhead. Our findings and framework provide practical guidance for improving the reliability of web-augmented LLM-based code generation systems in real-world software engineering scenarios.
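The abstract describes Sherlock as a three-stage pipeline: detect potential SII instances, debug them to locate the responsible EIPs and their root causes, and repair the pages. The sketch below illustrates that flow in Python; all names (`Page`, `detect_sii`, `debug_eips`, `repair`) and the root-cause labels are illustrative assumptions, not the paper's actual API.

```python
# Hypothetical sketch of the detect -> debug -> repair pipeline described in
# the abstract. Data structures and function names are assumptions for
# illustration only, not Sherlock's real interface.
from dataclasses import dataclass, field


@dataclass
class Page:
    url: str
    content: str
    is_error_inducing: bool = False
    root_cause: str = ""  # assumed labels: "misaligned_spec" or "flawed_code"


@dataclass
class Generation:
    task: str
    code: str
    retrieved_pages: list = field(default_factory=list)


def detect_sii(gen: Generation, passes_tests: bool) -> bool:
    """Flag a generation as a potential Search-Induced Issue (SII):
    it failed validation and relied on retrieved web content."""
    return (not passes_tests) and bool(gen.retrieved_pages)


def debug_eips(gen: Generation) -> list:
    """Identify the Error-Inducing Pages (EIPs) among retrieved pages."""
    return [p for p in gen.retrieved_pages if p.is_error_inducing]


def repair(page: Page, trusted_snippet: str = "") -> Page:
    """Repair an EIP: annotate misaligned specifications, or replace a
    flawed code snippet with an evaluated one from a trusted source."""
    if page.root_cause == "misaligned_spec":
        page.content = "[WARNING: misaligned specification]\n" + page.content
    elif page.root_cause == "flawed_code" and trusted_snippet:
        page.content = trusted_snippet
    return page
```

In this sketch, detection is gated on both a failed validation signal and the presence of retrieved pages, mirroring the paper's framing of SII as failures induced specifically by external web content.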

Paper Structure

This paper contains 19 sections, 4 figures, and 4 tables.

Figures (4)

  • Figure 1: A motivating example of a failure caused by SII.
  • Figure 2: Two types of EIPs identified in preliminary study.
  • Figure 3: An overview of Sherlock's automated pipeline for LLM service providers.
  • Figure 4: A representative failure case of Sherlock.