MedOpenClaw: Auditable Medical Imaging Agents Reasoning over Uncurated Full Studies

Weixiang Shen, Yanzhu Hu, Che Liu, Junde Wu, Jiayuan Zhu, Chengzhi Shen, Min Xu, Yueming Jin, Benedikt Wiestler, Daniel Rueckert, Jiazhen Pan

Abstract

Evaluation of vision-language models (VLMs) on medical imaging tasks currently oversimplifies clinical reality by relying on pre-selected 2D images that demand significant manual curation. This setup misses the core challenge of real-world diagnostics: a true clinical agent must actively navigate full 3D volumes across multiple sequences or modalities to gather evidence and support a final decision. To address this, we propose MEDOPENCLAW, an auditable runtime that lets VLMs operate dynamically within standard medical tools and viewers (e.g., 3D Slicer). On top of this runtime, we introduce MEDFLOWBENCH, a full-study medical imaging benchmark covering multi-sequence brain MRI and lung CT/PET, which systematically evaluates medical agentic capabilities across viewer-only, tool-use, and open-method tracks. Initial results reveal a critical insight: while state-of-the-art LLMs/VLMs (e.g., Gemini 3.1 Pro and GPT-5.4) can navigate the viewer to solve basic study-level tasks, their performance paradoxically degrades when given access to professional support tools, owing to a lack of precise spatial grounding. By bridging the gap between static-image perception and interactive clinical workflows, MEDOPENCLAW and MEDFLOWBENCH establish a reproducible foundation for developing auditable, full-study medical imaging agents.
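The abstract specifies the high-level interaction pattern but not a concrete API. The following is a minimal, illustrative Python sketch of the kind of bounded viewer loop the abstract describes: a VLM agent chooses from a restricted action set, receives rendered observations, and accumulates an auditable trace. The action names and the `agent.decide`, `viewer.render`, and `viewer.execute` interfaces are assumptions made for illustration, not MedOpenClaw's actual interface.

```python
from dataclasses import dataclass, field
from enum import Enum

class ViewerAction(Enum):
    # Hypothetical bounded action vocabulary; the paper's actual
    # action set is not specified in the abstract.
    SET_SEQUENCE = "set_sequence"          # e.g., switch T1 -> FLAIR
    SET_SLICE = "set_slice"                # scroll to a slice index on an axis
    SET_WINDOW_LEVEL = "set_window_level"  # adjust intensity windowing
    CAPTURE_VIEW = "capture_view"          # save the current rendering as evidence
    ANSWER = "answer"                      # terminate with a final answer

@dataclass
class TraceStep:
    action: ViewerAction
    args: dict
    observation: str  # e.g., path to the rendered screenshot

@dataclass
class Trace:
    steps: list = field(default_factory=list)

def run_episode(agent, viewer, task, max_steps=30):
    """Generic interactive loop: the agent (a VLM wrapper) picks a
    bounded viewer action, the viewer executes it and returns a rendered
    observation, and every step is logged so the run can be audited."""
    trace = Trace()
    observation = viewer.render()  # initial view of the full study
    for _ in range(max_steps):
        action, args = agent.decide(task, observation, trace)
        if action is ViewerAction.ANSWER:
            # Grounded final answer plus the complete audit trail.
            return args.get("answer"), trace
        observation = viewer.execute(action, args)
        trace.steps.append(TraceStep(action, args, observation))
    return None, trace  # step budget exhausted without an answer
```

Restricting the agent to an enumerated action set is what makes the resulting trace bounded and replayable, in contrast to the black-box setting that conventional VQA benchmarks evaluate.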

Figures (2)

  • Figure 1: Left: Conventional medical VQA benchmarks rely on pre-selected, diagnostically relevant 2D images as inputs. They evaluate black-box models, where neither the decision-making process nor the supporting evidence is observable. Right: In contrast, our proposed benchmark is built on our runtime, MedOpenClaw, which interacts with 3D Slicer through a bounded viewer interface. This setup produces a transparent reasoning process, including an explicit trace, evidence objects, and a grounded final answer. MedFlowBench is designed to evaluate this interactive loop. Moreover, toolkits such as MONAI (Cardoso et al., 2022) can be seamlessly integrated into both the framework and the benchmark. Beyond evaluation, the entire system can function as a medical imaging copilot, MedCopilot, assisting clinicians and alleviating the complexity of real-world workflows.
  • Figure 2: Representative auditable execution traces from the Brain MRI (top) and Lung CT/PET (bottom) modules under the Tool-Use setting. For readability, the longer runtime logs are condensed into the decision-relevant steps of each workflow. The complete action chain, including the corresponding tool arguments, visual outputs, and final reports/answers, is fully auditable; a sketch of what one serialized trace record might contain follows below.
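To make the auditability concrete, here is a hypothetical serialized trace record in the spirit of Figure 2. All field names, values, and file paths are illustrative assumptions, not the paper's actual log schema.

```python
# Hypothetical trace record for one Tool-Use episode; the field names
# and values are illustrative, not MedOpenClaw's real schema.
trace_record = {
    "task_id": "brain_mri_0001",   # assumed task identifier format
    "track": "tool-use",
    "steps": [
        {
            "action": "set_sequence",
            "args": {"sequence": "T1c"},
            "observation": "screenshots/step_01.png",
        },
        {
            "action": "set_slice",
            "args": {"axis": "axial", "index": 87},
            "observation": "screenshots/step_02.png",
        },
    ],
    # Evidence objects: the views the agent cites for its answer.
    "evidence": ["screenshots/step_02.png"],
    "final_answer": "Enhancing lesion in the left frontal lobe.",
}
```

Because every action, its arguments, and the resulting visual output are persisted, a reviewer can replay the chain step by step rather than trusting an opaque final answer.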