Depth-Driven Geometric Prompt Learning for Laparoscopic Liver Landmark Detection
Jialun Pei, Ruize Cui, Yaoqian Li, Weixin Si, Jing Qin, Pheng-Ann Heng
TL;DR
This work addresses robust liver landmark detection in dynamic laparoscopic scenes to support 2D-3D fusion and AR guidance. It proposes D2GPLand, a depth-driven geometric prompt learning network that fuses RGB and depth cues via a CNN encoder and a frozen SAM encoder, guided by a Depth-aware Prompt Embedding module and a Semantic-specific Geometric Augmentation scheme. On the L3D dataset, it achieves state-of-the-art performance across metrics, outperforming 12 baselines, and generalizes well across the two contributing medical sites. Together, the L3D dataset and the proposed method offer a practical pathway to real-time, geometry-aware intraoperative guidance in liver surgery.
Abstract
Laparoscopic liver surgery presents surgeons with a complex and dynamic intraoperative environment, in which it remains a significant challenge to distinguish critical or even hidden structures inside the liver. Liver anatomical landmarks, e.g., the ridge and ligament, serve as important markers for 2D-3D alignment, which can significantly enhance surgeons' spatial perception for precise surgery. To facilitate the detection of laparoscopic liver landmarks, we collect a novel dataset called L3D, which comprises 1,152 frames with detailed landmark annotations from surgical videos of 39 patients across two medical sites. For benchmarking purposes, 12 mainstream detection methods are selected and comprehensively evaluated on L3D. Furthermore, we propose a depth-driven geometric prompt learning network, namely D2GPLand. Specifically, we design a Depth-aware Prompt Embedding (DPE) module that is guided by self-supervised prompts and generates semantically relevant geometric information with the benefit of global depth cues extracted from SAM-based features. Additionally, a Semantic-specific Geometric Augmentation (SGA) scheme is introduced to efficiently merge RGB-D spatial and geometric information through reverse anatomic perception. The experimental results indicate that D2GPLand obtains state-of-the-art performance on L3D, with 63.52% DICE and 48.68% IoU scores. Together with 2D-3D fusion technology, our method can directly provide the surgeon with intuitive guidance information in laparoscopic scenarios.
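The dual-branch design described above (a trainable CNN encoder for RGB features, a frozen SAM encoder supplying depth-derived cues, and a fusion step before decoding) can be sketched roughly as follows. This is a minimal illustrative sketch only: the stand-in encoders, feature dimensions, and the simple concatenate-and-project fusion are assumptions for exposition, not the paper's actual DPE/SGA implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def cnn_encoder(rgb):
    # Hypothetical stand-in for the trainable CNN branch: maps an
    # (H, W, 3) RGB frame to a 256-channel feature map at 1/4 scale.
    h, w, _ = rgb.shape
    return rng.standard_normal((h // 4, w // 4, 256))

def frozen_sam_encoder(depth):
    # Stand-in for the frozen SAM image encoder; in the paper its
    # features supply the global depth cues used for prompt embedding.
    h, w = depth.shape
    return rng.standard_normal((h // 4, w // 4, 256))

def fuse(rgb_feat, depth_feat, w_proj):
    # Illustrative fusion: concatenate along the channel axis, then
    # project back to 256 channels (a gross simplification of DPE/SGA).
    cat = np.concatenate([rgb_feat, depth_feat], axis=-1)  # (..., 512)
    return cat @ w_proj                                    # (..., 256)

rgb = rng.random((64, 64, 3))      # toy RGB frame
depth = rng.random((64, 64))       # toy depth map
w_proj = rng.standard_normal((512, 256)) / np.sqrt(512)

fused = fuse(cnn_encoder(rgb), frozen_sam_encoder(depth), w_proj)
print(fused.shape)  # (16, 16, 256)
```

The fused feature map would then feed a landmark decoder; the sketch only shows how the two encoder streams are brought to a common representation.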
