
AVDA: Autonomous Vibe Detection Authoring for Cybersecurity

Fatih Bulut, Carlo DePaolis, Raghav Batta, Anjali Mangal

Abstract

With the rapid advancement of AI in code generation, cybersecurity detection engineering faces new opportunities to automate traditionally manual processes. Detection authoring - the practice of creating executable logic that identifies malicious activities from security telemetry - is hindered by fragmented code across repositories, duplication, and limited organizational visibility. Current workflows remain heavily manual, constraining both coverage and velocity. In this paper, we introduce AVDA, a framework that leverages the Model Context Protocol (MCP) to automate detection authoring by integrating organizational context - existing detections, telemetry schemas, and style guides - into AI-assisted code generation. We evaluate three authoring strategies - Baseline, Sequential, and Agentic - across a diverse corpus of production detections and state-of-the-art LLMs. Our results show that Agentic workflows achieve a 19% improvement in overall similarity score over Baseline approaches, while Sequential workflows attain 87% of Agentic quality at 40x lower token cost. Generated detections excel at TTP matching (99.4%) and syntax validity (95.9%) but struggle with exclusion parity (8.9%). Expert validation on a 22-detection subset confirms strong Spearman correlation between automated metrics and practitioner judgment ($\rho = 0.64$, $p < 0.002$). By integrating seamlessly into standard developer environments, AVDA provides a practical path toward AI-assisted detection engineering with quantified trade-offs between quality, cost, and latency.

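The expert-validation step compares automated similarity scores against practitioner ratings using Spearman's rank correlation. A minimal pure-Python sketch of that statistic follows, assuming no tied ranks; the score lists and names here are illustrative, not the paper's actual data:

```python
def _ranks(values):
    """Return 1-based ranks of values (no tie handling)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for rank, idx in enumerate(order, start=1):
        ranks[idx] = rank
    return ranks

def spearman_rho(x, y):
    """Spearman's rho via the rank-difference formula (tie-free case)."""
    n = len(x)
    rx, ry = _ranks(x), _ranks(y)
    d_sq = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d_sq / (n * (n ** 2 - 1))

# Hypothetical automated similarity scores vs. expert ratings (1-5 scale)
auto_scores = [0.91, 0.42, 0.77, 0.60, 0.85]
expert_marks = [5, 2, 4, 1, 3]
print(round(spearman_rho(auto_scores, expert_marks), 2))  # → 0.8
```

With tied ranks, average ranks and the Pearson-on-ranks formulation would be needed instead; in practice, `scipy.stats.spearmanr` also returns the p-value reported alongside $\rho$ in the abstract.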

Paper Structure

This paper contains 20 sections, 5 equations, 3 figures, and 10 tables.

Figures (3)

  • Figure 1: AVDA architecture. Detection artifacts flow from organizational repositories through the Data Processing layer, which populates vector and relational stores. The MCP Server exposes these assets to LLM-powered authoring workflows via standardized tools. Detection authors interact through IDE extensions or CLI, with generated code following standard DevOps pipelines to deployment.
  • Figure 2: Comparison of detection authoring workflows.
  • Figure 3: Best-performing configuration per model over time (Agentic workflow).