Unlocking Telemetry Potential: Self-Supervised Learning for Continuous Clinical Electrocardiogram Monitoring

Thomas Kite, Uzair Tahamid Siam, Brian Ayers, Nicholas Houstis, Aaron D Aguirre

TL;DR

This paper applies deep learning to a large volume of unlabeled electrocardiogram (ECG) telemetry signals, which are commonly used for continuous patient monitoring in hospitals but have important differences from the standard, single time-point 12-lead ECG used in many prior machine learning studies.

Abstract

Machine learning (ML) applied to routine patient monitoring within intensive care units (ICUs) has the potential to improve care by providing clinicians with novel insights into each patient's health and expected response to interventions. This paper applies deep learning to a large volume of unlabeled electrocardiogram (ECG) telemetry signals, which are commonly used for continuous patient monitoring in hospitals but have important differences from the standard, single time-point 12-lead ECG used in many prior machine learning studies. We applied self-supervised learning to pretrain a spectrum of deep networks on approximately 147,000 hours of ECG telemetry data. Our approach leverages this dataset to train models that significantly improve performance on four distinct downstream tasks compared with direct supervised learning using labeled data. These pretrained models enable medically useful predictions and estimates in smaller patient cohorts that are typically limited by the scarcity of labels. Notably, we demonstrate that our pretrained networks can continuously annotate ECG telemetry signals, thereby providing monitoring capabilities that are often unavailable due to the requirement for specialized expertise and time-consuming professional annotations.

Paper Structure

This paper contains 20 sections, 2 equations, 9 figures, 5 tables.

Figures (9)

  • Figure 1: Pretraining using patient-contrastive learning applied to ECG telemetry. (Left) t-SNE plot illustrating the distribution of embeddings corresponding to different participants, highlighting distinct and compact clusters. (Center) NT-Xent loss and (Right) InfoNCE loss across training iterations, demonstrating rapid initial decrease followed by stabilization.
  • Figure 2: Pretrained and subsequently fine-tuned models outperform models trained from scratch. Model performance is plotted as a function of model size and pretraining (vs from scratch) across three prediction tasks: age regression (left), sex classification (center) and intervals regression (right). Models were trained with 100% of available labels. Trend lines emphasize scaling relations.
  • Figure 3: The advantage of pretrained networks grows with model size and scarcity of labels. Performance advantage is measured as % improvement in validation loss compared to models trained from scratch on the same subset of labels.
  • Figure 4: Classification of atrial fibrillation from continuous ECG telemetry signals using a pretrained neural network. (Top left) Afib classifier performance as a function of label scarcity. (Top right) Probability of Afib (top) estimated across a continuous 15-hour stretch of ECG telemetry together with HR prediction (bottom). Both the Afib probability and heart rate reveal the onset of the cardiac arrhythmia. (Lower panels) The corresponding ECG segments from two moments of interest in the longer time series. The leftmost ECG shows an arrhythmia (Afib), while the segment on the right is a small window where the heart regained normal sinus rhythm.
  • Figure 5: ECG timing intervals reliably change after a first dose of dofetilide (500 mcg capsule), as measured using a neural network designed for continuous telemetry signals. Both QT and PR intervals lengthen in the hours following the dose.
  • ...and 4 more figures
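Figure 1 describes pretraining with patient-contrastive learning and an NT-Xent loss. As a rough illustration of that objective, the sketch below computes NT-Xent over a batch where two embedded views (e.g. two ECG telemetry windows from the same patient) form each positive pair. The `temperature` value and the pairing scheme are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss.

    z1, z2: (N, D) arrays of embeddings for two views of the same N
    patients. Row i of z1 and row i of z2 form a positive pair; all
    other rows in the batch act as negatives.
    """
    # L2-normalize so dot products are cosine similarities.
    z = np.concatenate([z1, z2], axis=0)                  # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature                           # (2N, 2N)
    np.fill_diagonal(sim, -np.inf)                        # exclude self-similarity

    n = z1.shape[0]
    # Row i (from z1) pairs with row i+n (from z2), and vice versa.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])

    # Cross-entropy per row: -log softmax at the positive index.
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

Minimizing this loss pulls embeddings of windows from the same patient together and pushes different patients apart, which is consistent with the distinct per-participant clusters shown in the t-SNE panel of Figure 1.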