Beyond BMI: Smartphone Body Composition Phenotyping for Cardiometabolic Risk Assessment

Dr Menglian Zhou, Mr Arno Charton, Ms Emily Blanchard, Mr Lawrence Cai, Dr Tracy Giest, Dr Herschel Watkins, Dr Mohamed Bouterfa, Ms Jackie Wasson, Dr Keerthana Natarajan, Mr Aniket Deshpande, Dr Jiening Zhan, Dr Shelten Yuen, Dr Xavi Prieto, Jacqueline Shreibati, Mark Malhotra, Shwetak Patel, Ms Lindsey Sunden, Dr Cathy Speed, Ms Alicia Kokoszka, Dr Aravind Natarajan, Dr Alexandros Pantelopoulos, Dr Ahmed Metwally

Abstract

Body Mass Index (BMI) is a widely accessible but imprecise proxy of cardiometabolic health. While assessing true body composition is superior, gold-standard methods like Dual-Energy X-ray Absorptiometry (DXA) are not scalable. We address this gap by developing and validating "PhotoScan," a method to estimate body composition from smartphone imagery. We pretrained a deep learning model on UK Biobank participants (N=35,323) and fine-tuned on a newly recruited clinical cohort (PhotoBIA cohort, N=677) with diverse ethnicity, age, and body fat distribution, achieving high accuracy against DXA for total body fat percentage (BF%, MAE = 2.15%), Android-to-Gynoid fat ratio (A/G, MAE = 0.11), and visceral-to-subcutaneous fat area ratio (V/S, MAE = 0.09). Generalizability of the model was demonstrated on an independent metabolic health study cohort (MetabolicMosaic cohort, N=132 participants), achieving MAEs of 2.13% for BF%, 0.09 for A/G, and 0.09 for V/S. We then evaluated the clinical utility of these metrics in the MetabolicMosaic cohort by predicting insulin resistance (IR). Adding PhotoScan-derived body composition metrics to a baseline demographics model (Age, Sex, BMI) significantly improved insulin resistance classification (Area Under the Receiver Operating Characteristic Curve "AUROC" 76.0% vs 69.2%, DeLong test p=0.002, Net Reclassification Index "NRI" 0.593). Crucially, this accessible smartphone method achieved performance nearly equivalent to adding clinical-grade DXA data to the baseline demographics model (AUROC 77.3% vs 69.2%, DeLong test p=0.004, NRI 0.748). These findings demonstrate that smartphone-based phenotyping captures clinically meaningful risk signals missed by BMI and anthropometrics, offering a scalable alternative to DXA for cardiometabolic risk stratification.
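The abstract summarizes model gains via the Net Reclassification Index. For readers unfamiliar with the metric, the following is a minimal sketch of the category-free (continuous) NRI, assuming binary insulin-resistance labels and per-participant risk probabilities from a baseline and an enhanced model; the paper's exact implementation is not specified here, so this is illustrative only.

```python
import numpy as np

def continuous_nri(y, p_base, p_new):
    """Category-free (continuous) Net Reclassification Index.

    y      : binary outcome (1 = insulin resistant)
    p_base : risk probabilities from the baseline model
    p_new  : risk probabilities from the enhanced model
    NRI = [P(up|event) - P(down|event)] + [P(down|nonevent) - P(up|nonevent)]
    """
    y = np.asarray(y, dtype=bool)
    up = np.asarray(p_new) > np.asarray(p_base)
    down = np.asarray(p_new) < np.asarray(p_base)
    nri_events = up[y].mean() - down[y].mean()        # events should move up
    nri_nonevents = down[~y].mean() - up[~y].mean()   # nonevents should move down
    return nri_events + nri_nonevents
```

A positive NRI indicates that, on balance, the enhanced model raises predicted risk for true events and lowers it for non-events relative to the baseline.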


Paper Structure

This paper contains 32 sections, 11 figures, and 9 tables.

Figures (11)

  • Figure 1: Overview of the Body Composition Training Pipeline and Insulin Resistance Model Performance. The PhotoScan model was pre-trained using the UK Biobank cohort and fine-tuned on the PhotoBIA cohort, while the BIA model was trained on the PhotoBIA cohort. Body composition inferences on the MetabolicMosaic cohort (derived from both PhotoScan and BIA models), combined with DXA estimates and anthropometric measurements, were consolidated as feature sets to train independent insulin resistance classifiers. Model performance metrics are summarized in the inset (bottom left). The DeLong test p-values indicate whether the model's AUC differs significantly from the demographics-only baseline. The Net Reclassification Index (NRI) quantifies the improvement in correctly classifying subjects relative to the baseline.
  • Figure 2: Body Composition Model Performance on the PhotoBIA Cohort. A) Comparative accuracy of inferred BF% against ground truth (DXA-derived BF%) for the demographics-only model, BIA model, PhotoScan model, and BIA–PhotoScan fusion model. B) Comparative accuracy of inferred android-to-gynoid fat percentage ratio (A/G) and visceral-to-subcutaneous fat area ratio (V/S) against ground truth (DXA-derived A/G and V/S) for the demographics-only model and PhotoScan model. C - F) Scatterplots of BF% predictions from the C) demographics-only model, D) BIA model, E) PhotoScan model, and F) BIA–PhotoScan fusion model, against ground truth (DXA-derived BF%).
  • Figure 3: Inference of Body Composition on the MetabolicMosaic Cohort, and the Association with Cardiometabolic Health Metrics. A) Overview of the inference and evaluation pipeline for BF%. Overall, there are 215 visits with both valid PhotoScan and BIA measurements. B) Comparative accuracy of inferred BF% against ground truth (DXA-derived BF%) for the demographics-only model, BIA model, PhotoScan model, and BIA–PhotoScan fusion model on the PhotoBIA and MetabolicMosaic cohorts. C) Comparative accuracy of inferred A/G and V/S against ground truth (DXA-derived A/G and V/S) for the demographics-only model and PhotoScan model on the PhotoBIA and MetabolicMosaic cohorts. D - G) Scatterplots of ground truth (DXA-derived BF%) against BF% predictions from the D) demographics-only model, E) BIA model, F) PhotoScan model, and G) BIA–PhotoScan fusion model.
  • Figure 4: Feature Importance for Insulin Resistance Classification Models with Individual Feature Sets and Combinations. Full explanations of each feature set are described in Table \ref{tab:feature_importance}. A) Demographics Feature Set. B) Anthropometric Feature Set. C) DXA-derived Body Composition Feature Set. D) PhotoScan-predicted Body Composition Feature Set. E) Demographics combined with BIA Feature Set. F) Demographics combined with DXA-derived Body Composition Feature Set. G) Demographics combined with PhotoScan-predicted Body Composition Feature Set. H) Demographics combined with Anthropometric Feature Set.
  • Figure S1: Data Processing Workflow of the MetabolicMosaic Cohort Study.
  • ...and 6 more figures
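The insulin-resistance classifiers summarized in Figure 1 are compared by AUROC. As a point of reference, the AUROC can be computed directly from the rank-sum (Mann–Whitney U) identity; the sketch below assumes binary labels and continuous risk scores, and does not reproduce the paper's classifiers or the DeLong significance test.

```python
import numpy as np

def auroc(y, scores):
    """AUROC via the Mann-Whitney U identity: the probability that a
    randomly chosen positive outscores a randomly chosen negative
    (ties counted as one half)."""
    y = np.asarray(y, dtype=bool)
    pos = np.asarray(scores)[y]
    neg = np.asarray(scores)[~y]
    diff = pos[:, None] - neg[None, :]  # all positive-negative score pairs
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (len(pos) * len(neg))
```

An AUROC of 0.5 corresponds to chance-level discrimination; the paper's reported values (e.g. 76.0% vs 69.2%) can be read on this same scale.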