HarassGuard: Detecting Harassment Behaviors in Social Virtual Reality with Vision-Language Models

Junhee Lee, Minseok Kim, Hwanjo Heo, Seungwon Woo, Jinwoo Kim

Abstract

Social Virtual Reality (VR) platforms provide immersive social experiences but also expose users to serious risks of online harassment. Existing safety measures are largely reactive, while proactive solutions that detect harassment behavior during an incident often depend on sensitive biometric data, raising privacy concerns. In this paper, we present HarassGuard, a vision-language model (VLM)-based system that detects physical harassment in social VR using only visual input. We construct an IRB-approved harassment vision dataset, apply prompt engineering, and fine-tune VLMs to detect harassment behavior by considering contextual information in social VR. Experimental results demonstrate that HarassGuard achieves competitive performance compared to state-of-the-art baselines (i.e., LSTM/CNN, Transformer), reaching an accuracy of up to 88.09% in binary classification and 68.85% in multi-class classification. Notably, HarassGuard matches these baselines while using significantly fewer fine-tuning samples (200 vs. 1,115), offering unique advantages in contextual reasoning and privacy-preserving detection.

Paper Structure

This paper contains 23 sections, 4 figures, 4 tables.

Figures (4)

  • Figure 1: Examples of actions in Aggressive Behavior, Personal Space Violation, and Disruptive Behavior.
  • Figure 2: Participants’ VR usage time and awareness of harassment responses in social platforms.
  • Figure 3: Participants' Likert scale responses for Scenario 1 (S1): Communication Room (AB: Aggressive Behavior, PSV: Personal Space Violation).
  • Figure 4: Participants' Likert scale responses for Scenario 2 (S2): Whack-a-pig Room and Scenario 3 (S3): Sling Shot Room (AB: Aggressive Behavior, PSV: Personal Space Violation).