Is Vibe Coding Safe? Benchmarking Vulnerability of Agent-Generated Code in Real-World Tasks

Songwen Zhao, Danqing Wang, Kexun Zhang, Jiaxuan Luo, Zhuo Li, Lei Li

TL;DR

The paper investigates the safety of vibe-coded, AI-generated software by introducing SusVibes, a repository-scale benchmark of 200 feature-request tasks derived from real-world vulnerability-fixing commits in open-source projects, designed to jointly evaluate functionality and security. It benchmarks multiple agent frameworks and LLM backbones (e.g., SWE-Agent, OpenHands, Claude Code; Claude 4 Sonnet, Kimi K2, Gemini 2.5 Pro) using a fully automatic pipeline that constructs multi-file, multi-turn tasks and runs runtime tests across 77 CWEs, scoring functional and security metrics (FuncPass and SecPass) under a 200-step cap. Key results show that while functional success can exceed 50% in some configurations (best around 61%), SecPass hovers near 10%, and approximately 82.8% of functionally correct solutions are insecure, with security performance varying by CWE and repository. The authors present SusVibes as both a diagnostic benchmark revealing a persistent security gap in vibe coding and a scalable, automatic pipeline intended as a baseline for future improvements, arguing that security must be treated as a first-class objective in real-world deployments.
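As a concrete illustration of the two metrics, below is a minimal scoring sketch (not the authors' evaluation harness), assuming each task exposes a functional test suite and a security test suite and that a solution counts toward SecPass only if it is also functionally correct; the class and function names are illustrative.

```python
# Hypothetical scoring sketch; the data model and the exact definition of
# SecPass (here: functionally correct AND passes security tests) are
# assumptions, not the paper's published evaluation code.
from dataclasses import dataclass

@dataclass
class TaskResult:
    func_passed: bool   # agent patch passes the task's functionality tests
    sec_passed: bool    # agent patch also passes the task's security tests

def func_pass(results: list[TaskResult]) -> float:
    """FuncPass: fraction of tasks solved functionally."""
    return sum(r.func_passed for r in results) / len(results)

def sec_pass(results: list[TaskResult]) -> float:
    """SecPass: fraction of tasks solved both functionally and securely."""
    return sum(r.func_passed and r.sec_passed for r in results) / len(results)

def insecure_share_of_functional(results: list[TaskResult]) -> float:
    """Share of functionally correct solutions that fail the security tests."""
    functional = [r for r in results if r.func_passed]
    insecure = [r for r in functional if not r.sec_passed]
    return len(insecure) / len(functional) if functional else 0.0
```

Under this reading the reported numbers are mutually consistent: 61% FuncPass with 82.8% of those solutions insecure gives roughly 61% × (1 − 0.828) ≈ 10.5% SecPass.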

Abstract

Vibe coding is a new programming paradigm in which human engineers instruct large language model (LLM) agents to complete complex coding tasks with little supervision. Although it is increasingly adopted, are vibe coding outputs really safe to deploy in production? To answer this question, we propose SusVibes, a benchmark consisting of 200 feature-request software engineering tasks from real-world open-source projects, which, when given to human programmers, led to vulnerable implementations. We evaluate multiple widely used coding agents with frontier models on this benchmark. Disturbingly, all agents perform poorly in terms of software security. Although 61% of the solutions from SWE-Agent with Claude 4 Sonnet are functionally correct, only 10.5% are secure. Further experiments demonstrate that preliminary security strategies, such as augmenting the feature request with vulnerability hints, cannot mitigate these security issues. Our findings raise serious concerns about the widespread adoption of vibe coding, particularly in security-sensitive applications.

Paper Structure

This paper contains 35 sections, 12 figures, and 7 tables.

Figures (12)

  • Figure 2: Curation pipeline for mining open-source vulnerability-fixing commits, adaptively creating feature masks and task descriptions, and harnessing functionality and security tests. $\mathcal{C}_0$ is the vulnerability-fixing commit, $\mathcal{C}_{-1}$ is the commit immediately preceding $\mathcal{C}_0$, and $\mathcal{C}_{-1}^{\mathcal{M}}$ is the repository with the implementation of feature $\mathcal{F}$ removed. The security risk analysis in this example can be found in the paper's case-study section. (A toy sketch of this curation loop appears after the figure list.)
  • Figure 3: SWE-agent is used to create the feature mask $\mathcal{M}$ (left) and the task description (middle), and to perform task verification (right), by operating on a software repository.
  • Figure 4: Verification pipeline in which each line of the canonical feature implementation containing the security fixes is justified by a requirement in the generated task description. This verification result provides feedback for adaptively adjusting the feature mask.
  • Figure 5: Distribution of SusVibes's 108 real-world GitHub projects across diverse domains.
  • Figure 5: Impact of self-selection and oracle security strategies relative to the generic baseline. Both fail to increase the total number of secure solutions while degrading functional performance.
  • ...and 7 more figures
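To make the flow in Figures 2–4 concrete, here is a toy, runnable sketch of the adaptive curation loop: mask the feature from the pre-fix commit, generate a task description, and accept the task only once every line of the secure canonical fix is justified by a requirement. All helper names, data shapes, and the retry logic are illustrative assumptions, not the authors' implementation.

```python
# Toy sketch of the SusVibes curation loop (Figures 2-4); every helper below
# is a placeholder for an agent- or git-driven step and is NOT the paper's code.

def create_feature_mask(c_minus_1: str, c0: str) -> list[str]:
    # Placeholder: an SWE-agent inspects the fixing commit C0 to decide which
    # parts of C_{-1} implement the feature F and should be removed.
    return [f"remove feature code touched by {c0}"]

def write_task_description(mask: list[str]) -> str:
    # Placeholder: an agent writes a feature request covering the masked
    # functionality without revealing the security fix itself.
    return f"Re-implement the functionality removed by: {', '.join(mask)}"

def fix_lines_justified(description: str, canonical_fix: list[str]) -> bool:
    # Placeholder for Figure 4: check that every line of the canonical secure
    # implementation is justified by some requirement in the description.
    return bool(description) and all(canonical_fix)

def build_task(c0: str, c_minus_1: str, canonical_fix: list[str], max_rounds: int = 3):
    """Adaptively adjust the feature mask until the description covers the fix."""
    for _ in range(max_rounds):
        mask = create_feature_mask(c_minus_1, c0)        # feature mask M
        description = write_task_description(mask)       # feature request
        if fix_lines_justified(description, canonical_fix):
            return mask, description                     # yields C_{-1}^M plus the task
    raise RuntimeError("task discarded: description never justified the secure fix")

if __name__ == "__main__":
    mask, desc = build_task("C0", "C-1", canonical_fix=["validate(token) before use"])
    print(mask, desc, sep="\n")
```

In a real pipeline, the functional and security test suites harvested from the repository would then be run against the agent's patch to produce the FuncPass and SecPass labels used above.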