GUIDE: Resolving Domain Bias in GUI Agents through Real-Time Web Video Retrieval and Plug-and-Play Annotation

Rui Xie, Zhi Gao, Chenrui Shi, Zirui Shang, Lu Chen, Qing Li

Abstract

Large vision-language models have endowed GUI agents with strong general capabilities for interface understanding and interaction. However, due to insufficient exposure to domain-specific software operation data during training, these agents exhibit significant domain bias: they lack familiarity with the specific operation workflows (planning) and UI element layouts (grounding) of particular applications, which limits their real-world task performance. In this paper, we present GUIDE (GUI Unbiasing via Instructional-Video Driven Expertise), a training-free, plug-and-play framework that resolves GUI agent domain bias by autonomously acquiring domain-specific expertise from web tutorial videos through a retrieval-augmented automated annotation pipeline. GUIDE introduces two key innovations. First, a subtitle-driven Video-RAG pipeline unlocks video semantics through subtitle analysis, performing progressive three-stage retrieval (domain classification, topic extraction, and relevance matching) to identify task-relevant tutorial videos. Second, a fully automated annotation pipeline built on an inverse dynamics paradigm feeds consecutive keyframes, enhanced with UI element detection, into VLMs, inferring the planning and grounding knowledge that is then injected into the agent's corresponding modules to address both manifestations of domain bias. Extensive experiments on OSWorld demonstrate GUIDE's generality as a plug-and-play component for both multi-agent systems and single-model agents. It consistently yields improvements of over 5% and reduces execution steps, without modifying any model parameters or architecture, validating GUIDE as an architecture-agnostic way to mitigate GUI agent domain bias.
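
The figure captions below indicate that the inverse dynamics model $f_{\mathrm{IDM}}$ consumes keyframe pairs, UI element graphs, the extracted topic, and subtitle context. A plausible reconstruction of the paper's Eq. \ref{eq:inverse-dynamics} under those assumptions (the output form is our reading of the captions, not quoted from the paper):

$$\hat{a}_t \;=\; f_{\mathrm{IDM}}\big(s_t,\, s_{t+1},\, E_t,\, E_{t+1},\, T_{\mathrm{topic}},\, C_{\mathrm{sub}}\big)$$

where $\hat{a}_t$ denotes the inferred action annotation that explains the transition from keyframe $s_t$ to $s_{t+1}$, later decomposed into planning and grounding knowledge.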

Paper Structure

This paper contains 67 sections, 1 equation, 8 figures, 7 tables.

Figures (8)

  • Figure 1: Overview of GUIDE. (1) A Retrieval Agent filters YouTube candidates via three subtitle-driven stages to select top-$K$ videos. (2) An Annotation Agent applies $f_{\mathrm{IDM}}$ (Eq. \ref{eq:inverse-dynamics}) on keyframe pairs $s_t$/$s_{t+1}$ with UI element graphs $E_t$/$E_{t+1}$, topic $T_{\mathrm{topic}}$, and subtitle context $C_{\mathrm{sub}}$, producing planning and grounding knowledge. (3) Knowledge is injected into the GUI Agent in a plug-and-play manner, supporting multi-agent (Mode A) and single-model (Mode B) architectures.
  • Figure 2: Subtitle-driven Video-RAG pipeline. From 50+ YouTube candidates, a metadata pre-filter removes outliers, then three subtitle-driven stages progressively narrow the results: (a) domain classification filters non-GUI content, (b) topic extraction (title $+$ subtitle $\to$ semantic descriptor), and (c) dual-anchored relevance matching (topic as the primary anchor, score 0--1). The final top-$K$ ($K{\le}2$) videos proceed to annotation (Fig. \ref{fig:annotation}). A hypothetical relevance-scoring sketch follows the figure list.
  • Figure 3: Fully automated annotation pipeline. Retrieved videos (Fig. \ref{fig:rag-pipeline}) are converted into structured knowledge through three phases. (a) Perception Frontend: Whisper ASR $\to$ keyframe extraction (MOG2) $\to$ OmniParser UI element graphs $E_t$. (b) Inverse Dynamics Inference: $f_{\mathrm{IDM}}$ (Eq. \ref{eq:inverse-dynamics}) takes keyframe pairs ($s_t, s_{t+1}$), element graphs ($E_t, E_{t+1}$), topic $T_{\mathrm{topic}}$, and subtitle context $C_{\mathrm{sub}}$ to produce structured annotations. (c) Knowledge Decomposition: per-frame annotations are aggregated and decomposed into planning and grounding knowledge. A keyframe-extraction sketch likewise follows the figure list.
  • Figure 4: Qualitative example on a GIMP contrast-adjustment task (task: "make the picture's contrast stronger"). (a) Without domain knowledge, the agent would default to the "Image" menu ($\times$); planning knowledge from a retrieved tutorial redirects it to the correct "Colors" menu (1$\to$2). (b) Grounding knowledge provides a visual description of the Contrast slider, enabling precise identification among visually similar controls.
  • Figure 5: Human evaluation of the GUIDE annotation pipeline. Each stage is independently evaluated by three annotators on 300 randomly sampled videos. (a) Stage 1: GUI domain classification on 300 candidate videos (pre-filter), achieving 100% precision with zero non-GUI contamination. (b) Stage 2: topic extraction accuracy on a separate set of 300 confirmed GUI videos (post-filter), with 96% of topics rated acceptable.
  • ...and 3 more figures
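
The dual-anchored relevance matching in Figure 2(c) is described only at caption level. Below is a minimal sketch of how such a 0--1 score might be computed, assuming off-the-shelf sentence embeddings; the `sentence-transformers` model, the `topic_weight` blend, and the helper names are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of Figure 2(c): dual-anchored relevance matching.
# The embedding model and the anchor weighting are illustrative
# assumptions, not values taken from the paper.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def relevance_score(task: str, topic: str, subtitles: str,
                    topic_weight: float = 0.7) -> float:
    """Score a candidate video in [0, 1], with the extracted topic as
    the primary anchor and the raw subtitles as a secondary anchor."""
    t, p, s = model.encode([task, topic, subtitles], convert_to_tensor=True)
    topic_sim = util.cos_sim(t, p).item()      # primary anchor
    subtitle_sim = util.cos_sim(t, s).item()   # secondary anchor
    score = topic_weight * topic_sim + (1 - topic_weight) * subtitle_sim
    return max(0.0, min(1.0, score))           # clamp cosine to [0, 1]

def top_k(candidates, task, k=2):
    """Keep the top-K (K <= 2) candidates for annotation."""
    scored = [(relevance_score(task, c["topic"], c["subtitles"]), c)
              for c in candidates]
    return [c for _, c in sorted(scored, key=lambda x: -x[0])[:k]]
```

Weighting the topic above the raw subtitles mirrors the caption's "topic as primary anchor" design, though the paper may well use a VLM judge rather than embedding similarity.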
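
Figure 3(a) names MOG2 background subtraction for keyframe extraction. A minimal OpenCV sketch of that step is given below; the motion threshold, sampling stride, and the keep-the-first-stable-frame heuristic are assumptions, since the paper's exact criteria are not in this excerpt.

```python
# Hypothetical sketch of Figure 3(a)'s keyframe extraction: a frame is
# kept when MOG2 foreground motion settles after a burst of change,
# approximating "screenshot after each UI action". Thresholds are
# illustrative assumptions, not values from the paper.
import cv2

def extract_keyframes(video_path: str, motion_thresh: float = 0.02,
                      stride: int = 5):
    cap = cv2.VideoCapture(video_path)
    mog2 = cv2.createBackgroundSubtractorMOG2(history=200,
                                              detectShadows=False)
    keyframes, in_motion, idx = [], False, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        idx += 1
        if idx % stride:
            continue                       # subsample frames for speed
        fg = mog2.apply(frame)
        motion = (fg > 0).mean()           # fraction of changed pixels
        if motion > motion_thresh:
            in_motion = True               # a UI transition is under way
        elif in_motion:
            in_motion = False
            keyframes.append(frame.copy()) # first stable frame after change
    cap.release()
    return keyframes
```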