Is Vibe Coding Safe? Benchmarking Vulnerability of Agent-Generated Code in Real-World Tasks
Songwen Zhao, Danqing Wang, Kexun Zhang, Jiaxuan Luo, Zhuo Li, Lei Li
TL;DR
The paper investigates the safety of software produced by vibe coding by introducing SusVibes, a repository-scale benchmark of 200 feature-request software engineering tasks drawn from real-world open-source projects (tasks whose original human implementations led to vulnerabilities) that jointly evaluates functionality and security. It benchmarks multiple agent frameworks and LLM backbones (e.g., SWE-Agent, OpenHands, Claude Code; Claude 4 Sonnet, Kimi K2, Gemini 2.5 Pro) using a fully automatic pipeline that constructs multi-file, multi-turn tasks and runs runtime tests across 77 CWEs, reporting functional and security metrics (FuncPass and SecPass) under a 200-step cap. Key results show that while functional success can exceed 50% in some configurations (best around 61%), SecPass hovers near 10%, and approximately 82.8% of functionally correct solutions are insecure, with security performance varying by CWE and repository. The authors present SusVibes both as a diagnostic benchmark that reveals a persistent security gap in vibe coding and as a scalable, automatic construction pipeline to serve as a baseline for future work, arguing that security must be treated as a first-class objective in real-world deployments.
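The relationship between FuncPass, SecPass, and the "82.8% of functionally correct solutions are insecure" figure can be sketched as simple aggregate rates over per-task outcomes. The exact metric definitions are given in the paper; the per-task data and function names below are hypothetical, and the sketch assumes FuncPass and SecPass are the fractions of tasks passing the functional and security test suites, respectively.

```python
# Hypothetical per-task results: each entry records whether an agent's
# solution passed the functional tests and the security (vulnerability) tests.
# These numbers are illustrative, not taken from the paper's artifact.
results = [
    {"func": True,  "sec": True},
    {"func": True,  "sec": False},
    {"func": True,  "sec": False},
    {"func": False, "sec": False},
    {"func": True,  "sec": False},
]

def func_pass(rs):
    """Fraction of tasks whose solution passes the functional tests."""
    return sum(r["func"] for r in rs) / len(rs)

def sec_pass(rs):
    """Fraction of tasks whose solution passes the security tests."""
    return sum(r["sec"] for r in rs) / len(rs)

def insecure_among_functional(rs):
    """Among functionally correct solutions, the fraction failing security."""
    functional = [r for r in rs if r["func"]]
    return sum(not r["sec"] for r in functional) / len(functional)

print(func_pass(results))                  # 0.8
print(sec_pass(results))                   # 0.2
print(insecure_among_functional(results))  # 0.75
```

The last quantity is the one the paper reports as 82.8%: it conditions on functional correctness, which is why SecPass can sit near 10% even when most solutions look functionally fine.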
Abstract
Vibe coding is a new programming paradigm in which human engineers instruct large language model (LLM) agents to complete complex coding tasks with little supervision. Although it is increasingly adopted, are vibe coding outputs really safe to deploy in production? To answer this question, we propose SusVibes, a benchmark consisting of 200 feature-request software engineering tasks from real-world open-source projects, which, when given to human programmers, led to vulnerable implementations. We evaluate multiple widely used coding agents with frontier models on this benchmark. Disturbingly, all agents perform poorly in terms of software security. Although 61% of the solutions from SWE-Agent with Claude 4 Sonnet are functionally correct, only 10.5% are secure. Further experiments demonstrate that preliminary security strategies, such as augmenting the feature request with vulnerability hints, cannot mitigate these security issues. Our findings raise serious concerns about the widespread adoption of vibe coding, particularly in security-sensitive applications.
