
VACP: Visual Analytics Context Protocol

Tobias Stähle, Péter Ferenc Gyarmati, Thilo Spinner, Rita Sevastjanova, Dominik Moritz, Mennatallah El-Assady

Abstract

The rise of AI agents introduces a fundamental shift in Visual Analytics (VA), in which agents act as a new user group. Current agentic approaches, based on computer vision and raw DOM access, fail to perform VA tasks accurately and efficiently. This paper introduces the Visual Analytics Context Protocol (VACP), a framework that makes VA applications "agent-ready" by extending generic protocols to explicitly expose application state, available interactions, and mechanisms for direct execution. To support our context protocol, we contribute a formal specification of AI agent requirements and knowledge representations in VA interfaces. We instantiate VACP as a library compatible with major visualization grammars and web frameworks, enabling both the augmentation of existing systems and the development of new ones. Our evaluation across representative VA tasks demonstrates that VACP-enabled agents achieve higher success rates in interface interpretation and execution than current agentic approaches, while reducing token consumption and latency. VACP closes the gap between human-centric VA interfaces and machine perceivability, ensuring agents can reliably act as collaborative users in VA systems.

Paper Structure

This paper contains 25 sections and 9 figures.

Figures (9)

  • Figure 1: Overview of knowledge representation layers in VA applications, agents' accessibility, and VACP functions for perceiving the VA interface at each level and supporting interactions. Each layer abstracts the knowledge, including the data to be analyzed, data encoding, and interface functionalities defined by production logic and design decisions.
  • Figure 2: Example VACP application-state representation. The capability graph captures semantic structure and interaction-relevant relations; the state snapshot stores currently active values under the same stable references.
  • Figure 3: Overview of the evaluation pipeline, where the left (a) shows the overall task execution and evaluation workflow, and the right (b) illustrates the agent setup and the different evaluated scenarios S1–S4.
  • Figure 4: Overview of VA interfaces of the defined VA use cases.
  • Figure 5: Workflow of Claude Sonnet 4.5 solving Task UC1A in scenario S4. After inspecting a screenshot and the VACP semantic state, the agent adjusts the selected year and verifies the change in the UI. It then queries the data, identifies "Japan," confirms this by hovering over Japan’s data point, and finally returns "Japan" as the answer.
  • ...and 4 more figures