VisionClaw: Always-On AI Agents through Smart Glasses

Xiaoan Liu, DaeHo Lee, Eric J Gonzalez, Mar Gonzalez-Franco, Ryo Suzuki

Abstract

We present VisionClaw, an always-on wearable AI agent that integrates live egocentric perception with agentic task execution. Running on Meta Ray-Ban smart glasses, VisionClaw continuously perceives real-world context and enables in-situ, speech-driven action initiation and delegation via OpenClaw AI agents. This lets users execute tasks directly through the smart glasses, such as adding real-world objects to an Amazon cart, generating notes from physical documents, receiving meeting briefings on the go, creating events from posters, or controlling IoT devices. We evaluate VisionClaw through a controlled laboratory study (N=12) and a longitudinal deployment study (N=5). Results show that integrating perception and execution enables faster task completion and reduces interaction overhead compared to non-always-on and non-agent baselines. Beyond performance gains, deployment findings reveal a shift in interaction: tasks are initiated opportunistically during ongoing activities, and execution is increasingly delegated rather than manually controlled. These results suggest a new paradigm for wearable AI agents, where perception and action are continuously coupled to support situated, hands-free interaction.

Paper Structure

This paper contains 32 sections, 10 figures, and 5 tables.

Figures (10)

  • Figure 1: System architecture of VisionClaw. The wearable device layer captures audio and video from Meta Ray-Ban smart glasses via the DAT SDK and streams them through a phone app. These always-on streams are sent to the Gemini Live API over a persistent WebSocket connection. Gemini processes the multimodal input and either responds with spoken audio or issues tool calls routed to OpenClaw for execution via dual HTTP and WebSocket channels. (A minimal sketch of this streaming loop follows the figure list.)
  • Figure 2: Overview of the four tasks used in the study
  • Figure 3: Task completion time. Asterisks next to labels indicate significance from Friedman tests, and bracketed asterisks indicate significance from Wilcoxon signed-rank tests. (* p < .05, ** p < .01.)
  • Figure 4: NASA-TLX. Asterisks next to labels indicate significance from Friedman tests, and bracketed asterisks indicate significance from Wilcoxon signed-rank tests. Lower scores indicate better performance. (* p < .05, ** p < .01)
  • Figure 5: Self-authored questionnaire. Asterisks next to labels indicate significance from Friedman tests, and bracketed asterisks indicate significance from Wilcoxon signed-rank tests. (* p < .05, ** p < .01).
  • ...and 5 more figures
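
Figure 1 describes a pipeline in which always-on audio and video flow to a live multimodal model over one persistent WebSocket, and any tool calls the model emits are routed to OpenClaw for execution. Below is a minimal sketch of that loop, assuming placeholder endpoint URLs, message field names, and a stub play_audio helper; it illustrates the described architecture, not the actual VisionClaw implementation or the Gemini Live wire format.

```python
# Hypothetical sketch of the Figure 1 streaming loop: forward media chunks from
# the phone app to a live multimodal model over a persistent WebSocket, and
# route returned tool calls to an agent backend over HTTP. All URLs and message
# field names are illustrative placeholders, not the real protocol.
import asyncio
import base64
import json

import aiohttp
import websockets

LIVE_WS_URL = "wss://example.com/live"        # placeholder live-model endpoint
AGENT_HTTP_URL = "http://localhost:8000/act"  # placeholder agent executor


def play_audio(pcm: bytes) -> None:
    """Placeholder: hand decoded PCM audio to the device speaker."""


async def stream_device(ws, chunks):
    """Send base64-encoded media chunks upstream as they arrive from the glasses."""
    async for mime_type, data in chunks:  # chunks: async iterator of (mime_type, bytes)
        await ws.send(json.dumps({
            "media": {"mime_type": mime_type,
                      "data": base64.b64encode(data).decode()}
        }))


async def handle_responses(ws, http):
    """Play back spoken replies and forward tool calls to the agent for execution."""
    async for raw in ws:
        msg = json.loads(raw)
        if "audio" in msg:
            play_audio(base64.b64decode(msg["audio"]))
        elif "tool_call" in msg:
            async with http.post(AGENT_HTTP_URL, json=msg["tool_call"]) as resp:
                result = await resp.json()
            await ws.send(json.dumps({"tool_result": result}))


async def main(chunks):
    # One persistent socket for perception and responses; a separate HTTP
    # session for delegating tool execution to the agent.
    async with websockets.connect(LIVE_WS_URL) as ws, aiohttp.ClientSession() as http:
        await asyncio.gather(stream_device(ws, chunks),
                             handle_responses(ws, http))
```

In this shape, a single long-lived connection keeps perception continuously available while tool execution proceeds asynchronously on a separate channel, which matches the paper's framing of coupling always-on perception with delegated action.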