Towards GUI Agents: Vision-Language Diffusion Models for GUI Grounding

Shrinidhi Kumbhar, Haofu Liao, Srikar Appalaraju, Kunwar Yashraj Singh

Abstract

Autoregressive (AR) vision-language models (VLMs) have long dominated multimodal understanding, reasoning, and graphical user interface (GUI) grounding. Recently, discrete diffusion vision-language models (DVLMs) have shown strong performance in multimodal reasoning, offering bidirectional attention, parallel token generation, and iterative refinement. However, their potential for GUI grounding remains unexplored. In this work, we evaluate whether discrete DVLMs can serve as a viable alternative to AR models for GUI grounding. We adapt LLaDA-V for single-turn action and bounding-box prediction, framing the task as text generation from multimodal input. To better capture the hierarchical structure of bounding-box geometry, we propose a hybrid masking schedule that combines linear and deterministic masking, improving grounding accuracy by up to 6.1 points in Step Success Rate (SSR) over the GUI-adapted LLaDA-V trained with linear masking. Evaluations on four datasets spanning web, desktop, and mobile interfaces show that the adapted diffusion model with hybrid masking consistently outperforms the linear-masked variant and performs competitively with autoregressive counterparts despite limited pretraining. Systematic ablations reveal that increasing diffusion steps, generation length, and block length improves accuracy but also increases latency, with accuracy plateauing beyond a certain number of diffusion steps. Expanding the training data with diverse GUI domains further reduces latency by about 1.3 seconds and improves grounding accuracy by an average of 20 points across benchmarks. These results demonstrate that discrete DVLMs are a promising modeling framework for GUI grounding and represent an important step toward diffusion-based GUI agents.
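
To make the hybrid masking schedule concrete, the following is a minimal PyTorch-style sketch of the forward corruption it implies, not the authors' released code: the names MASK_ID and extent_mask are assumptions, and the linear phase follows the standard masked-diffusion recipe of sampling a mask ratio t uniformly and masking each response token independently with probability t, while the extent-coordinate tokens are always masked.

```python
import torch

MASK_ID = 0  # placeholder mask-token id; substitute the tokenizer's actual id


def hybrid_corrupt(response_ids: torch.Tensor, extent_mask: torch.Tensor):
    """Forward corruption for the hybrid schedule (illustrative sketch).

    response_ids: (B, L) token ids of the response, i.e. the serialized
                  action, optional type_in text, and (x1, y1, x2, y2) coords.
    extent_mask:  (B, L) bool tensor, True at the extent tokens (x2, y2).
    """
    B, L = response_ids.shape
    t = torch.rand(B, 1)                       # mask ratio t ~ U(0, 1)
    # Linear phase: mask each token independently with probability t.
    linear = torch.rand(B, L) < t
    # Deterministic phase: extent tokens are masked in every example.
    corrupt = linear | extent_mask
    noisy = torch.where(corrupt, torch.full_like(response_ids, MASK_ID), response_ids)
    return noisy, corrupt  # `corrupt` marks the positions the loss is computed over
```

Under this corruption, the extent tokens $(x_2, y_2)$ are always predicted from the image, instruction, and (partially visible) anchor, which is what lets the model learn the anchor-to-extent refinement described in Figure 1.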

Figures (5)

  • Figure 1: Overview of Hybrid Masking Adaptation of LLaDA-V for GUI Grounding. (a) The adapted framework takes a natural-language instruction and a GUI screenshot (from a web, desktop, or mobile interface) as input. LLaDA-V trained with linear masking predicts the action type, optional type_in text, and anchor coordinates $(x_1, y_1)$. LLaDA-V trained with full deterministic masking predicts the remaining bounding-box coordinates $(x_2, y_2)$ conditioned on the image, instruction, and anchor. (b) Linear Masking Phase: during the forward corruption process in training, action and anchor tokens in the response are randomly masked according to the linear schedule, providing coarse grounding supervision. (c) Deterministic Masking Phase: all response tokens are fully masked in the forward corruption process during training, and LLaDA-V predicts the bounding-box extent during denoising. (A two-stage decoding sketch follows this figure list.)
  • Figure 2: This figure shows an instance where LLaDA-V 8B trained with linear and full deterministic masking provides a more accurate target bounding box and action prediction than the model trained with linear masking alone. The green bounding box is the ground truth and the red one is the prediction. Both models receive an image and a natural-language instruction and produce the action type and target bounding box as text, visualized on the GUI image.
  • Figure 3: GUI data scaling behavior of LLaDA-V 8B trained with linear masking: comparison between LLaDA-V 8B trained on 7k web GUI samples from Mind2Web and on 120k mobile, web, and desktop GUI samples, evaluated across four GUI grounding datasets. M2W: Mind2Web, SWT: ScreenSpot-Web-Text, SWI: ScreenSpot-Web-Icon, VWA: Visual Web Arena. The left plot shows Step Success Rate (SSR), the center plot shows the number of Converged Steps, and the right plot shows average Inference Latency in seconds. Training with large-scale, multi-domain GUI data improves SSR and reduces both the number of Converged Steps required to produce a highly confident output and the Inference Latency, demonstrating better generalization and efficiency across GUI domains.
  • Figure 4: Effect of data annotation quality on grounding accuracy. The figure compares predictions from LLaDA-V trained with the default linear masking on the Mind2Web train split, with and without cropped images and OCR-based annotations. The top example, trained without OCR-based text annotations and cropping, produces an inaccurate bounding box due to inconsistent icon-level targets and high-resolution inputs. The bottom example, trained with cropped images and OCR-guided text annotations, receives more stable supervision, allowing the model to correctly localize the target element. The green bounding box is the ground truth and the red one is the prediction.
  • Figure 5: Effect of hybrid masking on bounding-box accuracy. The figure compares predictions from LLaDA-V trained with default linear masking (top) and with the proposed hybrid masking that combines linear and deterministic full masking (bottom). The linear-masked model correctly predicts the action type but generates an inaccurate bounding box, missing the target region. In contrast, the hybrid-masked model, guided by conditional refinement between anchor and extent coordinates, produces a precise bounding box that accurately localizes the target element. The green bounding box is the ground truth and the red one is the prediction.
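
As referenced in the Figure 1 caption, the sketch below shows how the two masking phases would translate into two-stage decoding at inference time. It is a schematic under stated assumptions, not the paper's implementation: logits_fn stands in for the multimodal DVLM forward pass, the span lengths are illustrative, and the inner loop follows the common confidence-based remasking recipe for LLaDA-style decoding, committing the most confident masked positions at each step.

```python
import torch


def two_stage_decode(logits_fn, prompt_ids, mask_id, coarse_len=12, extent_len=4, steps=8):
    """Two-stage GUI grounding decode (schematic, with assumed span lengths).

    Stage 1 denoises the action / type_in / anchor (x1, y1) span while the
    extent span stays masked; stage 2 then denoises the fully masked extent
    (x2, y2) span conditioned on the prompt and the decoded anchor.
    """
    def denoise(seq, lo, hi, steps):
        # Confidence-based refinement: commit the most confident masked
        # tokens at each step and keep the rest masked for the next pass.
        per_step = max(1, (hi - lo) // steps)
        for _ in range(steps):
            masked = seq[0, lo:hi] == mask_id
            if not masked.any():
                break
            probs = torch.softmax(logits_fn(seq)[0, lo:hi], dim=-1)
            conf, pred = probs.max(dim=-1)
            conf[~masked] = -1.0               # ignore already-decoded slots
            keep = conf.topk(min(per_step, int(masked.sum()))).indices
            seq[0, lo + keep] = pred[keep]
        return seq

    P = prompt_ids.shape[1]
    seq = torch.full((1, P + coarse_len + extent_len), mask_id, dtype=torch.long)
    seq[:, :P] = prompt_ids                            # multimodal prompt tokens
    seq = denoise(seq, P, P + coarse_len, steps)       # stage 1: action + anchor
    seq = denoise(seq, P + coarse_len, seq.shape[1], steps)  # stage 2: extent
    return seq[0, P:]                                  # decoded response tokens
```

The steps, coarse_len, and extent_len knobs mirror the abstract's ablations over diffusion steps, generation length, and block length: larger values add forward passes (latency) and improve accuracy until it plateaus beyond a certain number of steps.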