
FlexAI: A Multi-modal Solution for Delivering Personalized and Adaptive Fitness Interventions

Shivangi Agarwal, Zoya Ghoshal, Bharat Jain, Siddharth Siddharth

Abstract

Personalization of exercise routines is a crucial factor in helping people achieve their fitness goals. Despite this, contemporary fitness solutions often rely on static plans and fail to offer real-time, adaptive feedback tailored to an individual's physiological state, such as pain thresholds, fatigue levels, or form during a workout. This work introduces FlexAI, a multi-modal system that integrates computer vision, physiological sensing (heart rate and voice), and the reasoning capabilities of Large Language Models (LLMs) to deliver real-time, personalized workout guidance. FlexAI continuously monitors a user's physical form and level of exertion, among other parameters, to provide dynamic interventions focused on exercise intensity, rest periods, and motivation. To validate the system, we performed a technical evaluation confirming our models' accuracy and quantifying pipeline latency, alongside an expert review in which certified trainers validated the correctness of the LLM's interventions. Furthermore, in a controlled study with 25 participants, FlexAI demonstrated significant improvements over a static, non-adaptive control system: users reported significantly greater enjoyment, a stronger sense of achievement, and significantly lower levels of boredom and frustration. These results provide a blueprint for integrating multi-modal sensing with LLM-driven reasoning, demonstrating that adaptive coaching systems can be not only more engaging but also demonstrably reliable.

Paper Structure

This paper contains 69 sections, 8 figures, 6 tables.

Figures (8)

  • Figure 1: The distributions illustrate how satisfied users are with current routines, how receptive they would be to an AI health coach, and the kind of features they would expect from a comprehensive AI health coach
  • Figure 2: FlexAI's architecture comprises: (1) a sensing module of cameras, smartwatches, and microphones; (2) a processing module that processes sensor data to assess form, pain, physical load, and fatigue; (3) an inferencing module that obtains pain labels and HR and fatigue levels; (4) a reasoning module that leverages LLMs to provide real-time exercise corrections and intensity adjustments; and (5) a tone-adaptive voice assistant that delivers in-ear feedback.
  • Figure 3: Demonstration of how the Control and FlexAI systems differ in terms of form correction, repetition counting, and motivational phrases.
  • Figure 4: Specific triggers lead to their corresponding interventions during an exercise routine including form correction feedback, goal setting, intensity adjustments, rest suggestions, encouragement of accomplishments and milestones, progress updates, and repetition counting announcements.
  • Figure 5: Examples of our hierarchical prompting strategy. (a) An Inter-Exercise prompt for planning rest. (b) An Intra-Exercise prompt for real-time form correction.
  • ...and 3 more figures