QVAD: A Question-Centric Agentic Framework for Efficient and Training-Free Video Anomaly Detection

Lokman Bekit, Hamza Karim, Nghia T Nguyen, Yasin Yilmaz

Abstract

Video Anomaly Detection (VAD) is a fundamental challenge in computer vision, particularly due to the open-set nature of anomalies. While recent training-free approaches utilizing Vision-Language Models (VLMs) have shown promise, they typically rely on massive, resource-intensive foundation models to compensate for the ambiguity of static prompts. We argue that the bottleneck in VAD is not necessarily model capacity, but rather the static nature of inquiry. We propose QVAD, a question-centric agentic framework that treats VLM-LLM interaction as a dynamic dialogue. By iteratively refining queries based on visual context, our LLM agent guides smaller VLMs to produce high-fidelity captions and precise semantic reasoning without parameter updates. This "prompt-updating" mechanism effectively unlocks the latent capabilities of lightweight models, enabling state-of-the-art performance on UCF-Crime, XD-Violence, and UBNormal using a fraction of the parameters required by competing methods. We further demonstrate exceptional generalizability on the single-scene ComplexVAD dataset. Crucially, QVAD achieves high inference speeds with a minimal memory footprint, making advanced VAD capabilities deployable on resource-constrained edge devices.
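The abstract describes the VLM-LLM dialogue only at a high level; the minimal Python sketch below illustrates how such a prompt-updating loop could be wired together. The function names (`vlm_caption`, `llm_refine_query`, `llm_score`), the seed prompt, and the stopping criterion are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a question-centric VLM-LLM dialogue loop (assumed interfaces).
# The callables vlm_caption, llm_refine_query, and llm_score are hypothetical
# stand-ins for the VLM and LLM agents described in the abstract.

def qvad_score(frames, vlm_caption, llm_refine_query, llm_score, max_turns=3):
    """Iteratively refine the query, then score the clip from the dialogue."""
    query = "Describe the scene and any unusual activity."  # static seed prompt (Turn 0)
    captions = []
    for _ in range(max_turns):
        caption = vlm_caption(frames, query)      # VLM answers the current question
        captions.append(caption)
        query, done = llm_refine_query(captions)  # LLM hypothesizes an anomaly and asks a follow-up
        if done:                                  # stop once no further questions are needed
            break
    return llm_score(captions)                    # anomaly score derived from the accumulated dialogue
```

No model weights are updated anywhere in this loop; only the query string changes between turns, which is the sense in which the framework is training-free.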

Paper Structure

This paper contains 40 sections, 11 equations, 7 figures, 13 tables, and 3 algorithms.

Figures (7)

  • Figure 1: Overview of the proposed QVAD framework. The VLM and LLM agents engage in an iterative dialogue, where the LLM generates a query $q_t$ conditioned on the captions $C_t$ produced by the VLM and directs it back to the VLM. This feedback loop progressively refines their shared understanding of the scene.
  • Figure 2: Detailed architecture of the proposed QVAD framework.
  • Figure 3: Qualitative comparison of Static vs. Dynamic Prompting. In Turn 0 (standard VLM prompting), the model captures general scene dynamics but misses fine-grained semantic cues, leading to False Negatives. In Turn 1 or Turn 2, the QVAD Agent hypothesizes a potential anomaly and generates a targeted query, correcting the prediction without parameter updates.
  • Figure 4: Examples of anomaly scores and reasoning for videos from the UCF-Crime and XD-Violence test sets.
  • Figure 5: Exact anomaly scoring criteria for ComplexVAD.
  • ...and 2 more figures