Designing Fatigue-Aware VR Interfaces via Biomechanical Models

Harshitha Voleti, Charalambos Poullis

Abstract

Prolonged mid-air interaction in virtual reality (VR) causes arm fatigue and discomfort, negatively affecting user experience. Incorporating ergonomic considerations into VR user interface (UI) design typically requires extensive human-in-the-loop evaluation. Although biomechanical models have been used to simulate human behavior in HCI tasks, their application as surrogate users for ergonomic VR UI design remains underexplored. We propose a hierarchical reinforcement learning framework that leverages biomechanical user models to evaluate and optimize VR interfaces for mid-air interaction. A motion agent is trained to perform button-press tasks in VR under sequential conditions, using realistic movement strategies and estimating muscle-level effort via a validated three-compartment control with recovery (3CC-r) fatigue model. The simulated fatigue output serves as feedback for a UI agent that optimizes UI element layout via reinforcement learning (RL) to minimize fatigue. We compare the RL-optimized layout against a manually designed centered baseline and a Bayesian-optimized baseline. Results show that fatigue trends from the biomechanical model align with human user data. Moreover, the RL-optimized layout using simulated fatigue feedback produced significantly lower perceived fatigue in a follow-up human study. We further demonstrate the framework's extensibility via a simulated case study on longer sequential tasks with non-uniform interaction frequencies. To our knowledge, this is the first work using simulated biomechanical muscle fatigue as a direct optimization signal for VR UI layout design. Our findings highlight the potential of biomechanical user models as effective surrogate tools for ergonomic VR interface design, enabling efficient early-stage iteration with less reliance on extensive human participation.
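The abstract's fatigue estimation rests on the three-compartment controller (3CC) family of models, in which the motor-unit pool is split into resting, active, and fatigued compartments whose fractions evolve under a target load. The following is a minimal illustrative sketch of such dynamics under forward-Euler integration; the parameter values, the controller form, and the rest-time recovery multiplier `r` are assumptions for illustration, not the calibrated 3CC-r configuration used in the paper.

```python
def fatigue_3cc(target_load, dt=0.1, F=0.0146, R=0.0022,
                LD=10.0, LR=10.0, r=7.5):
    """Toy three-compartment muscle fatigue simulation.

    Compartments (fractions of the motor-unit pool, summing to 1):
      MR: resting, MA: active, MF: fatigued.
    `target_load` is a sequence of target activation levels TL(t) in [0, 1].
    F and R are fatigue/recovery rates; LD/LR are recruitment/de-recruitment
    gains; `r` boosts recovery during full rest (TL == 0), loosely mirroring
    the "recovery" extension in 3CC-r. All values here are placeholders.
    Returns the fatigued-fraction trace MF(t).
    """
    MR, MA, MF = 1.0, 0.0, 0.0
    trace = []
    for TL in target_load:
        # Proportional controller moving units between resting and active pools.
        if MA < TL:
            C = LD * min(TL - MA, MR)   # recruit, limited by the resting pool
        else:
            C = LR * (TL - MA)          # de-recruit excess active units
        Rr = R * (r if TL == 0 else 1.0)  # faster recovery at complete rest
        # Compartment flows conserve the total pool (dMR + dMA + dMF = 0).
        MR += (-C + Rr * MF) * dt
        MA += (C - F * MA) * dt
        MF += (F * MA - Rr * MF) * dt
        trace.append(MF)
    return trace
```

Under a sustained submaximal load the fatigued fraction grows monotonically, and it decays again during rest, which is the qualitative behavior the UI agent's cumulative-fatigue reward aggregates over an interaction sequence.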


Paper Structure

This paper contains 51 sections, 7 equations, 7 figures, 2 tables.

Figures (7)

  • Figure 1: Overview of the proposed hierarchical framework. A high-level UI agent proposes discrete interface layouts and evaluates them using cumulative fatigue feedback from a simulated user. Each candidate layout is instantiated in the Unity-based VR application through the SIM2VR task module. Rendered observations are provided to the simulated user's perception module, which is fed into a learned control policy that outputs muscle activation signals. A low-level motion agent controls a biomechanical model to execute the interaction sequence under the proposed layout. Muscle activations are logged during execution and converted into fatigue estimates. The cumulative fatigue over the completed sequence is returned to the UI agent as a reward signal, together with the resulting layout state, to guide subsequent layout optimization.
  • Figure 2: Overview of the motion agent used to simulate interaction. The motion agent generates muscle control signals using an RL policy and interacts with the VR application, which updates the scene and returns rendered observations, reward, and state information. Muscle activations produced during task execution are logged and later converted into fatigue estimates, which are used to evaluate interface layouts.
  • Figure 4: Overview of the UI agent responsible for interface layout optimization. The agent proposes discrete grid-based button layouts, which are instantiated in the VR environment and evaluated by the motion agent. Aggregated fatigue is returned as a reward signal, along with the next state of the UI agent defined by the grid coordinates of all button positions.
  • Figure 5: UI configurations evaluated in the study. (a) RL-based layout optimized using biomechanical fatigue feedback, (b) Bayesian optimization (BO) layout minimizing accumulated effort, and (c) static layout with centrally placed buttons.
  • Figure 6: NASA-TLX workload ratings across the three UI configurations. Bars show mean scores for each NASA-TLX subscale, aggregated across participants. Lower values indicate lower perceived workload.
  • ...and 2 more figures
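The loop depicted in Figures 1 and 4 can be sketched abstractly: the UI agent proposes a discrete grid layout, a simulated user returns cumulative fatigue, and the agent uses that scalar as a reward to steer subsequent proposals. The sketch below replaces the biomechanical motion agent with a hypothetical fatigue proxy (`simulate_fatigue`, costing distance from an assumed comfortable "home" cell) and uses a simple epsilon-greedy local search rather than the paper's RL formulation; all names and parameters are illustrative.

```python
import itertools
import random

def simulate_fatigue(layout, home=(2, 1)):
    """Hypothetical stand-in for the motion agent: in the paper this is a
    full biomechanical simulation; here, buttons far from a comfortable
    home cell simply cost more (Manhattan distance)."""
    return sum(abs(x - home[0]) + abs(y - home[1]) for x, y in layout)

def optimize_layout(n_buttons=3, grid=(4, 4), episodes=500, eps=0.2, seed=0):
    """Epsilon-greedy search over discrete grid layouts, mirroring the UI
    agent's loop: propose a layout, obtain fatigue feedback, keep the best."""
    rng = random.Random(seed)
    cells = list(itertools.product(range(grid[0]), range(grid[1])))
    best, best_cost = None, float("inf")
    for _ in range(episodes):
        if best is None or rng.random() < eps:
            # Explore: propose a fresh layout of distinct cells.
            candidate = rng.sample(cells, n_buttons)
        else:
            # Exploit: nudge one button of the best layout to a free cell.
            candidate = list(best)
            i = rng.randrange(n_buttons)
            free = [c for c in cells if c not in candidate]
            candidate[i] = rng.choice(free)
        cost = simulate_fatigue(candidate)   # fatigue feedback as reward signal
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best, best_cost
```

Because the feedback is a single scalar per episode, the same loop structure accommodates any simulated-user backend, which is what makes the biomechanical model a drop-in surrogate for human evaluation in this framework.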