VisConductor: Affect-Varying Widgets for Animated Data Storytelling in Gesture-Aware Augmented Video Presentation

Temiloluwa Femi-Gege, Matthew Brehmer, Jian Zhao

TL;DR

VisConductor addresses the challenge of authoring and delivering gesture-aware augmented video presentations featuring dynamic data visualizations. It introduces a modular, widget-based system that binds a compact set of expressive gestures to chart animations, foreshadowing, and annotation reveals, with a dedicated presenter view for feedback. Two qualitative studies (N=11 presenters and N=11 audience members) demonstrate that gesture-controlled animation can enhance engagement and narrative clarity, while revealing design trade-offs between spontaneity, coherence, and control. The work suggests practical pathways for future gesture-aware presentation tools and provides insights into how affective data storytelling can be supported in remote communication contexts.

Abstract

Augmented video presentation tools provide a natural way for presenters to interact with their content, resulting in engaging experiences for remote audiences, such as when a presenter uses hand gestures to manipulate and direct attention to visual aids overlaid on their webcam feed. However, authoring and customizing these presentations can be challenging, particularly when presenting dynamic data visualizations (i.e., animated charts). To this end, we introduce VisConductor, an authoring and presentation tool that equips presenters with the ability to configure gestures that control affect-varying visualization animation, foreshadow visualization transitions, direct attention to notable data points, and animate the disclosure of annotations. These gestures are integrated into configurable widgets, allowing presenters to trigger content transformations by executing gestures within widget boundaries, with feedback visible only to them. Altogether, our palette of widgets provides a level of flexibility appropriate for improvisational presentations and ad-hoc content transformations, such as when responding to audience engagement. To evaluate VisConductor, we conducted two studies focusing on presenters (N = 11) and audience members (N = 11). Our findings indicate that the approach taken with VisConductor can facilitate interactive and engaging remote presentations with dynamic visual aids. Reflecting on our findings, we also offer insights to inform the future of augmented video presentation tools.

Paper Structure

This paper contains 23 sections and 7 figures.

Figures (7)

  • Figure 1: Frames from early low-fidelity video prototypes of animated charts composited over a presenter gesticulating.
  • Figure 2: The presenter interface of VisConductor consists of: (A) the Presentation Preview, (B) the Timeline Slider, (C) the Storyline Tab, (D) the Widget Settings Tab, (E) the Widget List, and a palette of (F) Chart Widgets, (G) Annotation Widgets, and (H) Gesture Widgets.
  • Figure 3: An overview of widget parametrization in VisConductor: (A1) Gesture Widget settings for gesture type, recognition duration, which hand to recognize for one-handed gestures, and which operation the gesture will trigger: (A2) Selection, (A3) Foreshadowing, (A4) Playback, or (A5) Annotation; (B) Annotation Widget settings include those for specifying annotation type, color, opacity, reveal duration, and reveal easing; (C1) Chart Widget settings include those for ingesting tabular data and binding fields to position, size, and color scales used in the chart, while (C2) Chart Widget animation preferences allow for the selection of keyframes, mark opacities, and foreshadowing design.
  • Figure 4: Four moments in a presentation as seen from the Presenter Preview, exemplify visual feedback for four types of gesture: (A) an Open Hand gesture reveals a text annotation; (B) a Pointing gesture highlights specific data marks within the scatterplot; (C) a Rectangular Framing gesture foreshadows the trajectories of data marks appearing within that enclosed region; and (D) a continuous Dialling gesture controls the playback of an animation corresponding to changes in the data over time.
  • Figure 5: Visual foreshadowing applied to the two Chart Widgets. (A) Scatterplot position foreshadowing, showing the initial and final position of the mark(s). (B) Scatterplot trajectory foreshadowing, showing the path of one or more data marks over time. (C) Bar chart race position foreshadowing, highlighting the future position of a bar. (D) Bar chart race trajectory foreshadowing, showing an ephemeral bump chart of the bar's future positions; e.g., Peru's rise from 6th to 5th place over three time steps.
  • ...and 2 more figures
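To make the widget parametrization described in Figure 3 concrete, here is a minimal TypeScript sketch of how the Gesture and Annotation Widget settings might be modeled. This is an illustrative assumption, not the authors' actual implementation or API; all type and field names (`GestureWidget`, `recognitionDurationMs`, etc.) are hypothetical.

```typescript
// Hypothetical data model for the widget settings in Figure 3.
// Names and shapes are assumptions for illustration only.

// Operations a gesture can trigger (Figure 3, A2–A5).
type Operation = "selection" | "foreshadowing" | "playback" | "annotation";

// Gesture vocabulary shown in Figure 4.
type GestureType = "openHand" | "pointing" | "rectangularFraming" | "dialling";

// Gesture Widget settings (Figure 3, A1): gesture type, recognition
// duration, which hand to recognize, and the operation to trigger.
interface GestureWidget {
  gesture: GestureType;
  recognitionDurationMs: number;   // how long the gesture must be held
  hand?: "left" | "right";         // only for one-handed gestures
  triggers: Operation;
}

// Annotation Widget settings (Figure 3, B): type, color, opacity,
// reveal duration, and reveal easing.
interface AnnotationWidget {
  kind: "text" | "shape";
  color: string;
  opacity: number;                 // 0..1
  revealDurationMs: number;
  easing: "linear" | "easeIn" | "easeOut";
}

// Example pairing, mirroring Figure 4(A): an Open Hand gesture held
// for 500 ms reveals a text annotation.
const revealGesture: GestureWidget = {
  gesture: "openHand",
  recognitionDurationMs: 500,
  hand: "right",
  triggers: "annotation",
};

const textAnnotation: AnnotationWidget = {
  kind: "text",
  color: "#ffcc00",
  opacity: 0.9,
  revealDurationMs: 800,
  easing: "easeOut",
};
```

Modeling each widget as an independent record of this kind matches the paper's modular framing: a presenter composes a palette of widgets, and each gesture is bound to exactly one operation.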