
Automated Clinical Data Extraction with Knowledge Conditioned LLMs

Diya Li, Asim Kadav, Aijing Gao, Rui Li, Richard Bourgon

TL;DR

This work proposes a novel framework that aligns an LLM's generated internal knowledge with external knowledge through in-context learning (ICL). A retriever identifies relevant units of internal or external knowledge, and a grader evaluates the truthfulness and helpfulness of the retrieved internal-knowledge rules in order to align and update the knowledge bases.

Abstract

The extraction of lung lesion information from clinical and medical imaging reports is crucial for research on and clinical care of lung-related diseases. Large language models (LLMs) can be effective at interpreting unstructured text in reports, but they often hallucinate due to a lack of domain-specific knowledge, leading to reduced accuracy and posing challenges for use in clinical settings. To address this, we propose a novel framework that aligns generated internal knowledge with external knowledge through in-context learning (ICL). Our framework employs a retriever to identify relevant units of internal or external knowledge and a grader to evaluate the truthfulness and helpfulness of the retrieved internal-knowledge rules, to align and update the knowledge bases. Experiments with expert-curated test datasets demonstrate that this ICL approach can increase the F1 score for key fields (lesion size, margin and solidity) by an average of 12.9% over existing ICL methods.
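As a rough illustration of the retrieve-then-grade loop the abstract describes, the sketch below retrieves internal-knowledge rules for a query and keeps only those a grader judges consistent with external knowledge. All rule text, the word-overlap scoring, and the grading threshold are hypothetical stand-ins; in the paper both the retriever and the grader are implemented by prompting an LLM.

```python
def retrieve(query, rules, k=2):
    """Rank candidate rules by word overlap with the query; return the top-k."""
    def overlap(rule):
        return len(set(query.lower().split()) & set(rule.lower().split()))
    return sorted(rules, key=overlap, reverse=True)[:k]

def grade(rule, external_kb, min_overlap=2):
    """Keep an internal rule only if it overlaps enough with some external fact.
    A trivial proxy for the paper's LLM grader of truthfulness/helpfulness."""
    rule_words = set(rule.lower().split())
    return any(len(rule_words & set(fact.lower().split())) >= min_overlap
               for fact in external_kb)

# Toy internal rules (LLM-generated) and external facts (curated knowledge).
internal_rules = [
    "A ground-glass opacity is a non-solid lesion",
    "Spiculated margins suggest malignancy",
    "The weather affects scan scheduling",
]
external_kb = [
    "ground-glass opacity is a non-solid lesion",
    "spiculated margin suggests malignancy",
]

query = "nodule with spiculated margin"
candidates = retrieve(query, internal_rules, k=2)
# Aligned rules: retrieved AND graded consistent with external knowledge.
aligned = [rule for rule in candidates if grade(rule, external_kb)]
```

The aligned rules would then be placed in the ICL prompt; rules that fail grading (like the scheduling rule above) never reach the extraction prompt.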

Paper Structure

This paper contains 34 sections, 3 figures, 10 tables, 1 algorithm.

Figures (3)

  • Figure 1: Example of lung lesion information extraction. Two findings (one describing a single lesion, and the other, two lesions) were identified in the source text. Example rules from the generated internal knowledge base are shown. First-stage finding detection and primary structured field parsing is followed by a second stage that further parses lesion description text.
  • Figure 2: Framework for two-stage knowledge conditioned clinical data extraction. A symbol in the figure marks modules implemented by prompting an LLM. Rules used in prompts for lesion finding detection are derived from the internal knowledge base and aligned with external knowledge by a grader. Unstructured lesion description text is extracted in stage 1. In stage 2, this text is parsed into structured fields by providing the LLM with additional specialized inputs, including a controlled vocabulary.
  • Figure 3: Heatmap of lesion size extraction performance with varying values for the retriever's top-$k$ hyper-parameter, for both lung-related and lung-irrelevant rules.
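The two-stage pipeline in Figures 1 and 2 (detect lesion findings, then parse their descriptions into structured fields) can be sketched as below. The report text, regexes, and controlled vocabulary are illustrative stand-ins; the paper performs both stages by prompting an LLM with aligned knowledge rules.

```python
import re

# Hypothetical controlled vocabulary for stage-2 field parsing.
CONTROLLED_VOCAB = {
    "margin": {"smooth", "spiculated", "lobulated"},
    "solidity": {"solid", "part-solid", "ground-glass"},
}

def stage1_detect_findings(report):
    """Stage 1: pull out sentences that describe a lesion."""
    return [s.strip() for s in report.split(".")
            if re.search(r"\b(nodule|lesion|mass)\b", s, re.I)]

def stage2_parse_fields(finding):
    """Stage 2: parse a lesion description into structured fields
    (size, margin, solidity) using the controlled vocabulary."""
    fields = {}
    size = re.search(r"(\d+(?:\.\d+)?)\s*(mm|cm)", finding)
    if size:
        fields["size"] = f"{size.group(1)} {size.group(2)}"
    for field, terms in CONTROLLED_VOCAB.items():
        for term in terms:
            if term in finding.lower():
                fields[field] = term
    return fields

report = ("There is a 6 mm solid nodule with spiculated margin in the "
          "right upper lobe. The heart is normal in size.")
findings = stage1_detect_findings(report)
parsed = [stage2_parse_fields(f) for f in findings]
```

In this toy run, stage 1 keeps only the nodule sentence, and stage 2 fills the three key fields (size, margin, solidity) that the paper evaluates.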