
Depth-Driven Geometric Prompt Learning for Laparoscopic Liver Landmark Detection

Jialun Pei, Ruize Cui, Yaoqian Li, Weixin Si, Jing Qin, Pheng-Ann Heng

TL;DR

This work addresses robust liver landmark detection in dynamic laparoscopic scenes to support 2D-3D fusion and AR guidance. It proposes D$^2$GPLand, a depth-driven geometric prompt learning network that fuses RGB and depth cues via a CNN encoder and a frozen SAM encoder, guided by Depth-aware Prompt Embedding and Semantic-specific Geometric Augmentation. On the L3D dataset, it achieves state-of-the-art performance across metrics and demonstrates notable improvements over 12 baselines, establishing strong generalization across sites. The availability of the L3D dataset and the proposed method offer a practical pathway to real-time, geometry-aware intraoperative guidance in liver surgery.

Abstract

Laparoscopic liver surgery poses a complex, dynamic intraoperative environment for surgeons, in which it remains a significant challenge to distinguish critical or even hidden structures inside the liver. Liver anatomical landmarks, e.g., ridge and ligament, serve as important markers for 2D-3D alignment, which can significantly enhance the spatial perception of surgeons for precise surgery. To facilitate the detection of laparoscopic liver landmarks, we collect a novel dataset called L3D, which comprises 1,152 frames with elaborate landmark annotations from surgical videos of 39 patients across two medical sites. For benchmarking purposes, 12 mainstream detection methods are selected and comprehensively evaluated on L3D. Further, we propose a depth-driven geometric prompt learning network, namely D$^2$GPLand. Specifically, we design a Depth-aware Prompt Embedding (DPE) module that is guided by self-supervised prompts and generates semantically relevant geometric information with the benefit of global depth cues extracted from SAM-based features. Additionally, a Semantic-specific Geometric Augmentation (SGA) scheme is introduced to efficiently merge RGB-D spatial and geometric information through reverse anatomic perception. The experimental results indicate that D$^2$GPLand obtains state-of-the-art performance on L3D, with 63.52% DICE and 48.68% IoU scores. Together with 2D-3D fusion technology, our method can directly provide the surgeon with intuitive guidance information in laparoscopic scenarios.
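The reported DICE and IoU scores are standard overlap metrics between predicted and ground-truth landmark masks. As a point of reference (this is an illustrative sketch, not the paper's evaluation code), they can be computed for binary masks as follows:

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def iou_score(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection-over-Union: |A∩B| / |A∪B| for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return (inter + eps) / (union + eps)

# Toy 2x3 masks: intersection has 2 pixels, union has 4.
pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
gt   = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(round(dice_score(pred, gt), 3))  # 2*2/(3+3) = 0.667
print(round(iou_score(pred, gt), 3))   # 2/4 = 0.5
```

Note that Dice and IoU are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why the paper's 63.52% DICE and 48.68% IoU move together across methods.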

Paper Structure

This paper contains 13 sections, 2 equations, 3 figures, and 4 tables.

Figures (3)

  • Figure 1: Augmented visualization of liver tumor in the laparoscopic video via anatomic landmarks. With consistent anatomical landmarks on 2D frames (middle) and 3D geometry (left), the preoperative 3D anatomy can be overlaid on the intraoperative 2D image for augmented visualization guidance (right).
  • Figure 2: Overview of the proposed D$^2$GPLand. $s$, $l$, $r$ denote the three types of landmarks, silhouette, ligament, and ridge, to be detected.
  • Figure 3: Visualizations of our D$^2$GPLand and competitors on L3D test set.