Foundation Model-guided Iteratively Prompting and Pseudo-Labeling for Partially Labeled Medical Image Segmentation

Qiaochu Zhao, Wei Wei, David Horowitz, Richard Bakst, Yading Yuan

Abstract

Automated medical image segmentation has achieved remarkable progress with fully labeled data. However, site-specific clinical priorities and the high cost of manual annotation often yield scans with only a subset of organs labeled, leading to the partial-label problem, which degrades segmentation performance. To address this issue, we propose IPnP, an Iteratively Prompting and Pseudo-labeling framework for partially labeled medical image segmentation. IPnP iteratively generates and refines pseudo-labels for unlabeled organs through collaboration between a trainable segmentation network (specialist) and a frozen foundation model (generalist), progressively recovering full-organ supervision. On the public AMOS dataset with a simulated partial-label setting, IPnP consistently improves segmentation performance over prior methods and approaches the performance of the fully labeled reference. We further evaluate on a private, partially labeled dataset of 210 head-and-neck cancer patients and demonstrate its effectiveness in real-world clinical settings.

Paper Structure

This paper contains 14 sections, 4 equations, 5 figures, 1 table.

Figures (5)

  • Figure 1: Visualization of partially supervised segmentation: (a) full reference mask (evaluation only); (b) partial-label mask used as supervision in training; (c) output of the nnU-Net baseline (Isensee et al., 2021); (d) output of the proposed IPnP framework.
  • Figure 2: Overview of the proposed IPnP framework.
  • Figure 3: Qualitative visualization of pseudo-labels in the IPnP framework. The first column shows the ground truth, while columns 2–5 show the pseudo-labels generated after epoch 50, epoch 150, epoch 350, and epoch 450, respectively.
  • Figure 4: Qualitative comparison between methods. The first column shows the ground truth, and the second to fifth columns correspond to Full Supervision (Full Labels), Full Supervision (Partial Labels), Partial Supervision (Partial Labels), and IPnP (Partial Labels), respectively.
  • Figure 5: Segmentation performance of IPnP and other methods on the HnN dataset, reported in DSC (%). The numbers in parentheses indicate the counts of labeled organs in the training, validation, and testing sets, respectively, for each organ.