
Follow Your Heart: Landmark-Guided Transducer Pose Scoring for Point-of-Care Echocardiography

Zaiyang Guo, Jessie N. Dong, Filippos Bellos, Jilei Hao, Emily J. MacKay, Trevor Chan, Shir Goldfinger, Sethu Reddy, Steven Vance, Jason J. Corso, Alison M. Pouch

Abstract

Point-of-care transthoracic echocardiography (TTE) makes it possible to assess a patient's cardiac function in almost any setting. A critical step in the TTE exam is acquisition of the apical 4-chamber (A4CH) view, which is used to evaluate clinically impactful measurements such as left ventricular ejection fraction (LVEF). However, optimizing transducer pose for high-quality image acquisition and subsequent measurement is a challenging task, particularly for novice users. In this work, we present a multi-task network that provides feedback cues for A4CH view acquisition and automatically estimates LVEF in high-quality A4CH images. The network cascades a transducer pose scoring module and an uncertainty-aware LV landmark detector with automated LVEF estimation. A strength is that network training and inference do not require cumbersome or costly setups for transducer position tracking. We evaluate performance on point-of-care TTE data acquired with a spatially dense "sweep" protocol around the optimal A4CH view. The results demonstrate the network's ability to determine when the transducer pose is on target, close to target, or far from target based on the images alone, while generating visual landmark cues that guide anatomical interpretation and orientation. In conclusion, we demonstrate a promising strategy to provide guidance for A4CH view acquisition, which may be useful when deploying point-of-care TTE in limited resource settings.

Paper Structure

This paper contains 9 sections, 1 equation, 3 figures, 2 tables.

Figures (3)

  • Figure 1: A. Overview of the multi-task network. TTE video frames are first passed through the landmark detector. The images and predicted landmarks then pass through the pose scoring module to be classified as green, yellow, or red. Optimal "green" clips are then passed to an automated LVEF estimator. B. Each TTE sweep starts with the transducer in the optimal A4CH pose and drifts away from the optimal pose. Ground truth pose categories (green, yellow, red) are manually assigned and mapped to a numeric pose score.
  • Figure 2: A. Predicted and ground truth landmarks. B. Predicted landmarks displayed with prediction uncertainty. C. Key landmark output passed on to the pose scoring module.
  • Figure 3: Overall confusion matrix and corresponding examples of misclassified TTE frames using images and landmarks.
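The cascade described in the abstract and Figure 1 — per-frame landmark detection, traffic-light pose scoring, and LVEF estimation gated on "green" clips — can be sketched as follows. This is a minimal illustrative sketch only: the function and parameter names, the severity ordering, and the clip-level aggregation rule are assumptions, not the authors' published interface.

```python
from typing import Callable, List, Optional, Tuple

# Hypothetical type aliases for the sketch.
Frame = object
Landmarks = List[Tuple[float, float]]
PoseLabel = str  # "green" | "yellow" | "red"

def score_clip(
    frames: List[Frame],
    detect_landmarks: Callable[[Frame], Landmarks],
    score_pose: Callable[[Frame, Landmarks], PoseLabel],
    estimate_lvef: Callable[[List[Frame]], float],
) -> Tuple[PoseLabel, Optional[float]]:
    """Run the cascaded inference flow on one TTE clip.

    Each frame is passed through the landmark detector, then the frame
    and its predicted landmarks are scored green/yellow/red. Only clips
    judged optimal ("green") are forwarded to the LVEF estimator, as in
    Figure 1A.
    """
    severity = {"green": 0, "yellow": 1, "red": 2}
    labels: List[PoseLabel] = []
    for frame in frames:
        landmarks = detect_landmarks(frame)          # uncertainty-aware LV landmarks
        labels.append(score_pose(frame, landmarks))  # traffic-light pose category
    # Assumed aggregation: the clip takes its worst per-frame label.
    clip_label = max(labels, key=severity.__getitem__)
    # Gate LVEF estimation on an optimal "green" clip.
    lvef = estimate_lvef(frames) if clip_label == "green" else None
    return clip_label, lvef
```

In this sketch the three stage functions are injected as callables, so any concrete detector, scorer, or estimator satisfying the signatures could be plugged in without changing the control flow.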