CoLoRSMamba: Conditional LoRA-Steered Mamba for Supervised Multimodal Violence Detection

Damith Chamalke Senadeera, Dimitrios Kollias, Gregory Slabaugh

Abstract

Violence detection benefits from audio, but real-world soundscapes can be noisy or only weakly related to the visible scene. We present CoLoRSMamba, a directional video-to-audio multimodal architecture that couples VideoMamba and AudioMamba through CLS-guided conditional LoRA. At each layer, the VideoMamba CLS token produces a channel-wise modulation vector and a stabilization gate that adapt the AudioMamba projections responsible for the selective state-space parameters ($\Delta$, $\mathbf{B}$, $\mathbf{C}$), including the step-size pathway, yielding scene-aware audio dynamics without token-level cross-attention. Training combines binary classification with a symmetric AV-InfoNCE objective that aligns clip-level audio and video embeddings. To support fair multimodal evaluation, we curate audio-filtered clip-level subsets of the NTU-CCTV and DVD datasets from their temporal annotations, retaining only clips with available audio. On these subsets, CoLoRSMamba outperforms representative audio-only, video-only, and multimodal baselines, achieving 88.63% accuracy / 86.24% F1-V on NTU-CCTV and 75.77% accuracy / 72.94% F1-V on DVD. It further offers a favorable accuracy-efficiency tradeoff, surpassing several larger models with fewer parameters and FLOPs.
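To make the conditioning mechanism concrete, below is a minimal PyTorch sketch of a CLS-guided conditional LoRA layer as described in the abstract: a frozen projection (e.g., one of AudioMamba's $\Delta$/B/C projections) receives a low-rank update whose output is modulated channel-wise by a vector derived from the video CLS token and blended in through a stabilization gate. The class name `CoLoRSLinear`, the module names `to_mod`/`to_gate`, the rank, and the gating form are assumptions for illustration; the paper's exact parameterization may differ.

```python
import torch
import torch.nn as nn

class CoLoRSLinear(nn.Module):
    """Sketch of CLS-conditioned LoRA steering: a frozen base projection
    plus a low-rank update, modulated channel-wise and gated by the
    VideoMamba CLS token (hypothetical parameterization)."""

    def __init__(self, d_in: int, d_out: int, d_cls: int, rank: int = 8):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)            # stands in for a pretrained projection
        for p in self.base.parameters():
            p.requires_grad_(False)                   # keep the base projection frozen
        self.lora_down = nn.Linear(d_in, rank, bias=False)
        self.lora_up = nn.Linear(rank, d_out, bias=False)
        nn.init.zeros_(self.lora_up.weight)           # adapter starts as a no-op
        self.to_mod = nn.Linear(d_cls, d_out)         # channel-wise modulation vector
        self.to_gate = nn.Linear(d_cls, 1)            # scalar stabilization gate

    def forward(self, x: torch.Tensor, cls_v: torch.Tensor) -> torch.Tensor:
        # x:     (B, T, d_in)  audio tokens at this layer
        # cls_v: (B, d_cls)    VideoMamba CLS token at the same layer
        mod = self.to_mod(cls_v).unsqueeze(1)                       # (B, 1, d_out)
        gate = torch.sigmoid(self.to_gate(cls_v)).unsqueeze(1)      # (B, 1, 1)
        delta = self.lora_up(self.lora_down(x)) * mod               # steered low-rank update
        return self.base(x) + gate * delta
```

Because the up-projection is zero-initialized and the update enters through a bounded gate, the adapted layer starts out identical to the frozen backbone, which is one plausible reading of the "stabilization" role the gate plays.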

Figures (4)

  • Figure 1: Our proposed Conditional LoRA Steering (CoLoRS) mechanism to steer any vector given a conditioning parameter.
  • Figure 2: Overview of CoLoRSMamba. (a) Full architecture: the video backbone processes the video input $\mathbf{X}^v$, while the audio backbone processes the log-mel spectrogram of $\mathbf{X}^a$ in parallel. At each layer $\ell$, the VideoMamba CLS token $\mathrm{CLS}_\ell^v$ conditions AudioMamba via a Conditional LoRA Steering module. The final video and audio descriptors ($\mathbf{z}_v$, $\mathbf{z}_a$) are concatenated and fed to a binary classifier, while a symmetric AV-InfoNCE loss aligns the two modalities (see the sketch after this list). (b) Bidirectional Mamba block shared by both backbones.
  • Figure 3: Accuracy-efficiency comparison on the DVD benchmark. Marker size is proportional to the number of parameters.
  • Figure 4: Prediction flip analysis on the DVD test split. Audio helps 2.67$\times$ more often than it hurts, yielding a net gain of 35 correctly classified clips.
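The symmetric AV-InfoNCE objective named in the abstract and in the Figure 2 caption can be sketched as a standard bidirectional InfoNCE over clip-level embeddings. The function name `av_infonce` and the temperature value are assumptions; the paper may use a different temperature or a learnable one.

```python
import torch
import torch.nn.functional as F

def av_infonce(z_a: torch.Tensor, z_v: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE between clip-level audio (z_a) and video (z_v)
    embeddings of shape (B, D): matched clips are positives, all other
    clips in the batch serve as negatives."""
    z_a = F.normalize(z_a, dim=-1)
    z_v = F.normalize(z_v, dim=-1)
    logits = z_a @ z_v.t() / temperature                  # (B, B) similarity matrix
    targets = torch.arange(z_a.size(0), device=z_a.device)
    loss_a2v = F.cross_entropy(logits, targets)           # audio -> video direction
    loss_v2a = F.cross_entropy(logits.t(), targets)       # video -> audio direction
    return 0.5 * (loss_a2v + loss_v2a)
```

Averaging the two directions makes the loss symmetric in the modalities, matching the clip-level audio-video alignment described in the abstract.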