
Unlocking Positive Transfer in Incrementally Learning Surgical Instruments: A Self-reflection Hierarchical Prompt Framework

Yu Zhu, Kang Li, Zheng Li, Pheng-Ann Heng

Abstract

To continuously enhance model adaptability in surgical video scene parsing, recent studies incrementally update the model so that it progressively learns to segment a growing number of surgical instruments over time. However, prior works have consistently overlooked the potential of positive forward knowledge transfer, i.e., how past knowledge could help learn new classes, and positive backward knowledge transfer, i.e., how learning new classes could help refine past knowledge. In this paper, we propose a self-reflection hierarchical prompt framework that unlocks the power of positive forward and backward knowledge transfer in class-incremental segmentation, aiming to proficiently learn new instruments, improve existing skills on regular instruments, and avoid catastrophic forgetting of old instruments. Our framework is built on a frozen, pre-trained model that adaptively appends instrument-aware prompts for new classes throughout the training episodes. To enable positive forward knowledge transfer, we organize instrument prompts into a hierarchical prompt parsing tree, with the instrument-shared prompt partition as the root node, n-part-shared prompt partitions as intermediate nodes, and instrument-distinct prompt partitions as leaf nodes, exposing reusable historical knowledge to new classes to simplify their learning. Conversely, to encourage positive backward knowledge transfer, we conduct self-reflection refinement of existing knowledge via directed-weighted graph propagation, examining the knowledge associations recorded in the tree to improve its representativeness without causing catastrophic forgetting. Our framework is applicable to both CNN-based models and advanced transformer-based foundation models, yielding more than 5% and 11% improvements over competing methods on two public benchmarks, respectively.
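The forward-transfer mechanism described above can be pictured as a tree of prompt partitions in which a new class reuses the shared nodes along its root-to-leaf path and only learns its own leaf. The following is a minimal illustrative sketch, not the authors' implementation; all class names, partition sizes, and the embedding dimension are hypothetical.

```python
# Illustrative sketch (hypothetical names/sizes, not the paper's code):
# a hierarchical prompt parsing tree whose root holds instrument-shared
# prompts, intermediate nodes hold n-part-shared prompts, and leaves
# hold instrument-distinct (ID) prompts learned per class.
import numpy as np

class PromptNode:
    def __init__(self, name, prompts):
        self.name = name
        self.prompts = prompts      # (num_prompts, dim) prompt partition
        self.children = []

    def add_child(self, node):
        self.children.append(node)
        return node

def assemble_prompts(leaf_path):
    """Concatenate partitions along a root-to-leaf path: the shared
    partitions are inherited (positive forward transfer), so only the
    leaf's ID partition must be newly learned for the new class."""
    return np.concatenate([n.prompts for n in leaf_path], axis=0)

dim = 8
root = PromptNode("instrument-shared", np.zeros((4, dim)))
shaft = root.add_child(PromptNode("n-part-shared (shaft-like)", np.zeros((2, dim))))
new_leaf = shaft.add_child(PromptNode("new-instrument (ID)", np.random.randn(2, dim)))

full = assemble_prompts([root, shaft, new_leaf])
print(full.shape)  # (8, 8): 4 shared + 2 part-shared + 2 distinct prompts
```

Only the leaf partition carries trainable parameters for the new class in this sketch; the inherited partitions stay frozen, which is what keeps the per-class learning problem small.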

Paper Structure

This paper contains 24 sections, 12 equations, 7 figures, 6 tables, 1 algorithm.

Figures (7)

  • Figure 1: (a) Prior works learn each instrument prompt independently. (b,c) They overlook the power of positive forward and backward transfer between early-acquired and newly-acquired knowledge. (d) In contrast, our work organizes instrument prompts from general to specific as a hierarchical prompt parsing tree. (e) It supports positive forward transfer by helping new instrument prompts inherit profitable knowledge from the past to assist new-class learning. (f) It further enables positive backward transfer by letting the newly-acquired knowledge wisely refine early-acquired knowledge to improve old-class segmentation.
  • Figure 2: Illustration of HPPT construction.
  • Figure 3: Overview of our self-reflection hierarchical prompt framework. We progressively append instrument-aware prompts into a pre-trained model during the training lifespan. For each new class, before diving into learning its specific prompts, we first build a hierarchical prompt parsing tree to discover the profitable knowledge it could inherit from the past, i.e., the instrument-shared (IS) and $n$-part-shared knowledge, promoting new-class learning by positive forward knowledge transfer. Simply learning one instrument-distinct (ID) prompt partition would be sufficient to delineate its contour. Based on the newly acquired ID partition, we trigger self-reflection on existing knowledge to support positive backward knowledge transfer, wisely refining their representativeness within the context of all seen classes without causing catastrophic forgetting.
  • Figure 4: Visualization comparison of segmentation results across all methods. Rows 1–2, 4–5: previous datasets. Rows 3, 6: current dataset.
  • Figure 5: Visual comparisons of our approach and highly competitive approaches. More comparison results are in the supplementary.
  • ...and 2 more figures
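The self-reflection step described in the overview caption, where the newly acquired ID partition refines earlier knowledge, can be sketched as a directed-weighted propagation: each old partition receives a small update from the new one, scaled by an edge weight. This is a hedged toy sketch under assumed choices (cosine-similarity edge weights, a fixed step size), not the paper's actual propagation rule.

```python
# Illustrative sketch (assumed edge weights and step size, not the
# authors' method): backward refinement by directed-weighted graph
# propagation. Old prompt partitions are nudged toward the new
# partition in proportion to an edge weight, with a small step so old
# knowledge is refined rather than overwritten (no forgetting).
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def backward_refine(old_parts, new_part, step=0.1):
    """Propagate from the new ID partition to each old partition along
    directed edges weighted by mean-prompt similarity; returns refined
    copies, leaving the originals untouched."""
    new_mean = new_part.mean(axis=0)
    refined = []
    for p in old_parts:
        w = max(cosine(p.mean(axis=0), new_mean), 0.0)  # edge weight in [0, 1]
        refined.append(p + step * w * (new_mean - p))   # small weighted update
    return refined

rng = np.random.default_rng(0)
old = [rng.standard_normal((2, 4)) for _ in range(3)]   # early-acquired partitions
new = rng.standard_normal((2, 4))                       # newly learned ID partition
out = backward_refine(old, new)
drift = max(np.abs(o - r).max() for o, r in zip(old, out))
print(f"max drift: {drift:.3f}")  # bounded by the small step size
```

The small, similarity-gated step is the key design point: dissimilar old classes (weight near zero) are left essentially unchanged, which is one simple way to refine representativeness without catastrophic forgetting.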