SafeScreen: A Safety-First Screening Framework for Personalized Video Retrieval for Vulnerable Users

Wenzheng Zhao, Madhava Kalyan Gadiputi, Fengpei Yuan

Abstract

Open-domain video platforms offer rich, personalized content that could support health, caregiving, and educational applications, but their engagement-optimized recommendation algorithms can expose vulnerable users to inappropriate or harmful material. These risks are especially acute in child-directed and care settings (e.g., dementia care), where content must satisfy individualized safety constraints before being shown. We introduce SafeScreen, a safety-first video screening framework that retrieves and presents personalized videos while enforcing individualized safety constraints. Rather than ranking videos by relevance or popularity, SafeScreen treats safety as a prerequisite and performs sequential approval or rejection of candidate videos through an automated pipeline. SafeScreen integrates three key components: (i) profile-driven extraction of individualized safety criteria, (ii) evidence-grounded assessment via adaptive question generation and multimodal VideoRAG analysis, and (iii) LLM-based decision-making that verifies safety, appropriateness, and relevance before content exposure. This design enables explainable, real-time screening of uncurated video repositories without relying on precomputed safety labels. We evaluate SafeScreen in a dementia-care reminiscence case study using 30 synthetic patient profiles and 90 test queries. Results demonstrate that SafeScreen prioritizes safety over engagement, diverging from YouTube's engagement-optimized rankings in 80-93% of cases, while maintaining high levels of safety coverage, sensibleness, and groundedness, as validated by both LLM-based evaluation and domain experts.

Paper Structure

This paper contains 49 sections, 3 figures, and 7 tables.

Figures (3)

  • Figure 1: Conceptual comparison between conventional engagement-driven video recommendation systems (top) and SafeScreen, a safety-first personalization framework for reminiscence video retrieval (bottom). SafeScreen reverses the standard optimization objective by prioritizing individual safety constraints and multimodal verification over popularity or crowd-based similarity.
  • Figure 2: Complete SafeScreen framework overview showing the three-stage pipeline: (1) Stage 1 (green): Prefiltering steps including risk detection, risk-aware profile extraction, preference extraction, and candidate video retrieval. The system requests permission for medium/high-risk queries or terminates if permission is denied. (2) Stage 2 (orange): VideoRAG Analysis where an LLM generates patient-specific safety questions based on the extracted profile and query, then VideoRAG analyzes candidate videos to produce evidence-grounded Q/A pairs. (3) Stage 3 (purple): LLM Evaluation implementing sequential safety screening. Videos are evaluated one at a time; the first to pass all safety criteria is selected, while failed videos are rejected immediately. The process continues until an acceptable video is found or all candidates are exhausted. User inputs (cyan) flow through sequential safety verification before video selection.
  • Figure 3: SafeScreen deployment contexts: clinical integration (left) and systematic evaluation (right).
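The sequential screening behavior described in the abstract and the Figure 2 caption (Stage 3) can be sketched as a short loop: candidate videos are evaluated one at a time against the individualized safety criteria, the first video to pass all criteria is selected, and failing videos are rejected immediately. The sketch below is illustrative only; the names (`Assessment`, `passes_all_criteria`, `screen_sequentially`) and the dictionary-based stand-in for the LLM evaluator are hypothetical, not part of the paper's implementation, which relies on LLM- and VideoRAG-based components.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Assessment:
    """Hypothetical container for one candidate video's Stage 2 output."""
    video_id: str
    qa_pairs: dict  # evidence-grounded Q/A pairs from VideoRAG analysis

def passes_all_criteria(assessment: Assessment, criteria: list) -> bool:
    # Stand-in for the Stage 3 LLM evaluator: every individualized
    # safety criterion must be judged "safe" by the video's evidence.
    return all(assessment.qa_pairs.get(c) == "safe" for c in criteria)

def screen_sequentially(candidates: list, criteria: list) -> Optional[str]:
    # Videos are evaluated one at a time; the first to pass every
    # safety criterion is selected, and failures are rejected
    # immediately. If all candidates are exhausted, nothing is shown.
    for assessment in candidates:
        if passes_all_criteria(assessment, criteria):
            return assessment.video_id
    return None
```

Because screening stops at the first acceptable video, the loop avoids ranking the full candidate pool, which matches the paper's framing of safety as a gating prerequisite rather than a scoring signal.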