SABER: Spatial Attention, Brain, Extended Reality

Tom Bullock, Emily Machniak, You-Jin Kim, Radha Kumaran, Justin Kasowski, Apurv Varshney, Julia Ram, Melissa M. Hernandez, Stina Johansson, Neil M. Dundon, Tobias Höllerer, Barry Giesbrecht

Abstract

Tracking moving objects is a critical skill for many everyday tasks, such as crossing a busy street, driving a car, or catching a ball. Attention is a key cognitive function that supports object tracking; however, our understanding of the brain mechanisms that support attention is almost exclusively based on evidence from tasks that present stable objects at fixed locations. Accounts of multiple object tracking are also limited because they are largely based on behavioral data alone and involve tracking objects in a 2D plane. Consequently, the neural mechanisms that enable moment-by-moment tracking of goal-relevant objects remain poorly understood. To address this knowledge gap, we developed SABER (Spatial Attention, Brain, Extended Reality), a new framework for studying the behavioral and neural dynamics of attention to objects moving in 3D. Participants (n=32) completed variants of a task inspired by the popular virtual reality (VR) game, Beat Saber, where they used virtual sabers to strike stationary and moving color-defined target spheres while we recorded electroencephalography (EEG). We first established that standard univariate EEG metrics, which are typically used to study spatial attention to static objects presented on 2D screens, can generalize effectively to an immersive VR context involving both static and dynamic 3D stimuli. We then used a computational modeling approach to reconstruct moment-by-moment attention to the locations of stationary and moving objects from oscillatory brain activity, demonstrating the feasibility of precisely tracking attention in a 3D space. These results validate SABER and provide a foundation for future research that is critical not only for understanding how attention works in the physical world, but is also directly relevant to the development of better VR applications.

Paper Structure

This paper contains 21 sections, 4 equations, 6 figures.

Figures (6)

  • Figure 1: VR-EEG setup, cognitive tasks and protocol. (a) A participant engaged in the task while wearing an EEG cap and VR headset. (b) Schematic examples of trials in the Static-Single and Static-Multiple conditions. A red target sphere appeared in front of the participant either in isolation or flanked by non-targets and the participant used either saber to strike the target as quickly as possible. (c) Schematic examples of trials in the Dynamic-Single and Dynamic-Multiple conditions. On each trial a target sphere appeared at a distance, either in isolation or flanked by non-targets, and moved towards the participant. They used either saber to intercept the target when it came within reach. (d) Experimental workflow. Curved arrows depict the between-participants counterbalancing of the four main conditions.
  • Figure 2: Inverted Encoding Modeling (IEM) Sequence. The IEM assumes brain responses can be modeled as a set of hypothetical information channels (Sprague et al., 2018). The first step is to estimate to what extent a linear combination of predetermined channel responses (the basis set, (a)) captures the underlying structure of the observed EEG data. This process generates regression weights for each electrode and channel, which essentially represent the contribution of each electrode to each location-specific channel (b). Next, these weights are used to estimate channel response functions (CRFs) from the observed EEG data (c). Spatially selective responses are then quantified by circularly shifting the CRFs to a common center (0 degrees), folding them at the center and computing the linear slope (d).
  • Figure 3: Behavior. Response times and accuracy for (a) Static-Single (SS) and Static-Multiple (SM) conditions, and (b) Dynamic-Single (DS) and Dynamic-Multiple (DM) conditions. Gray and white lines in boxplots represent group medians and means, respectively. **$p_{null} < .01$, ***$p_{null} < .001$
  • Figure 4: Event-Related Potentials. ERP waveforms depict averaged contralateral and ipsilateral brain responses to target onsets in the Static-Single (a) and Static-Multiple (b) conditions. Horizontal black bars at the base of each plot mark timepoints where the lateralized responses are statistically different. Topographic maps to the right of the ERP plots show mean activation across posterior scalp regions for the large negative-going deflection averaged across the indicated timepoints. Contralateral and ipsilateral electrodes used to compute waveforms are marked with red and black circles, respectively.
  • Figure 5: Alpha Lateralization. Plots show lateralization for (a) Static and (b) Dynamic conditions. Topographic maps show the distribution of alpha across posterior scalp sites related to one example target location (red target, 90°). At the base of each plot, condition-specific color-coded bars indicate timepoints where the lateralization index was significantly different from zero, while grey bars mark time points where lateralization differed significantly between conditions. Additionally, condition-specific color-coded vertical dashed lines indicate the mean saber strike reaction times for each condition.
  • ...and 1 more figure
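The IEM pipeline outlined in the Figure 2 caption can be sketched in a few lines of linear algebra. The following is a minimal illustration, not the authors' implementation: the raised-cosine basis set, the function names, and the pseudoinverse least-squares solution are all assumptions standing in for choices the paper itself specifies (e.g., Sprague et al., 2018-style channel tuning functions).

```python
import numpy as np

def make_basis_set(n_channels=8, n_locations=8):
    """(a) Predicted response of each location channel at each stimulus location.

    Channels are modeled here as half-rectified raised-cosine tuning
    functions spaced evenly around 360 degrees (an assumed, common choice).
    """
    centers = np.linspace(0, 2 * np.pi, n_channels, endpoint=False)
    locs = np.linspace(0, 2 * np.pi, n_locations, endpoint=False)
    basis = np.cos((locs[None, :] - centers[:, None]) / 2) ** 7
    return np.clip(basis, 0, None)            # (n_channels, n_locations)

def train_iem(B_train, C_train):
    """(b) Regression weights: contribution of each channel to each electrode.

    B_train: (n_electrodes, n_trials) EEG data (e.g., oscillatory power)
    C_train: (n_channels, n_trials) predicted channel responses
    Solves B = W C in the least-squares sense.
    """
    return B_train @ np.linalg.pinv(C_train)  # (n_electrodes, n_channels)

def invert_iem(W, B_test):
    """(c) Estimate channel response functions from held-out EEG data."""
    return np.linalg.pinv(W) @ B_test         # (n_channels, n_trials)

def fold_and_slope(crf_centered):
    """(d) Fold a re-centered CRF at its center and compute the linear slope.

    crf_centered: CRF circularly shifted so the target channel sits at the
    middle index; a steeper slope over distance from the center indicates
    stronger spatial selectivity.
    """
    c = len(crf_centered) // 2
    left = crf_centered[:c][::-1]             # points left of center, reversed
    right = crf_centered[c + 1:]              # points right of center
    m = min(len(left), len(right))
    folded = np.concatenate([[crf_centered[c]], (left[:m] + right[:m]) / 2])
    return np.polyfit(np.arange(len(folded)), folded, 1)[0]
```

As a sanity check, synthetic data generated as a linear mixture of channel responses (B = W·C) round-trips through training and inversion to recover the original channel responses, and a peaked CRF yields a negative slope over distance from its center.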