
Supporting Reflection and Forward-Looking Reasoning With Data-Driven Questions

Simon WS Fischer, Hanna Schraffenberger, Serge Thill, Pim Haselager

Abstract

Many generative AI systems as well as decision-support systems (DSSs) provide operators with predictions or recommendations. Various studies show, however, that people can mistakenly adopt the erroneous results presented by those systems. Hence, it is crucial to promote critical thinking and reflection during interaction. One approach we are focusing on involves encouraging reflection during machine-assisted decision-making by presenting decision-makers with data-driven questions. In this short paper, we provide a brief overview of our work in that regard, namely: 1) the development of a question taxonomy, 2) the development of a prototype in the medical domain and the feedback received from clinicians, 3) a method for generating questions using a large language model, and 4) a proposed scale for measuring cognitive engagement in human-AI decision-making. In doing so, we contribute to the discussion about the design, development, and evaluation of tools for thought, i.e., AI systems that provoke critical thinking and enable novel ways of sense-making.


Paper Structure

This paper contains 8 sections, 2 figures, and 1 table.

Figures (2)

  • Figure 1: The interface of our prototype: The bar charts show predictions of the effectiveness of three possible treatment options, divided into responder (success) and non-responder (failure) categories. At the bottom, reflective questions are displayed in plain text. The current tab shows the question about the possibility of changing an input feature to make the effectiveness of a treatment option more likely (the minimum change in input with the maximum effect on the outcome). This counterfactual thinking can provide insights into the workings of the DSS. In addition, it could provide an opportunity to first consider other intervention options, such as therapy, and only then consider other treatments, such as surgery.
  • Figure 2: A flowchart illustrating our question generation system. Based on input data, such as patient information, a decision-support system computes a prediction. An explanation is then generated for this prediction in the form of feature contributions (via LIME). Finally, the explanation and the prediction are passed to a language model to formulate a question. The generated question helps the decision-maker reflect on the prediction and decision at hand.
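The pipeline in Figure 2 can be sketched in code. This is a minimal, self-contained illustration, not the authors' implementation: the toy scoring model, the feature-contribution stub (standing in for LIME), and the template-based question step (standing in for the language model) are all hypothetical.

```python
# Illustrative sketch of the Figure 2 pipeline: prediction -> explanation -> question.
# All function names and logic below are hypothetical stand-ins; the actual system
# uses a trained decision-support model, LIME feature contributions, and an LLM.

def predict_effectiveness(patient):
    """Stand-in decision-support model: score a treatment's success likelihood."""
    # Toy linear scoring over two features; a real DSS would be a trained model.
    score = 0.4 * patient["mobility"] + 0.6 * (1 - patient["pain_level"])
    return max(0.0, min(1.0, score))

def feature_contributions(patient):
    """Stand-in for a LIME-style explanation: per-feature contribution weights."""
    return {
        "mobility": 0.4 * patient["mobility"],
        "pain_level": -0.6 * patient["pain_level"],
    }

def formulate_question(prediction, contributions):
    """Stand-in for the LLM step: turn prediction + explanation into a question."""
    top_feature = max(contributions, key=lambda f: abs(contributions[f]))
    return (
        f"The model predicts {prediction:.0%} treatment effectiveness, driven mainly "
        f"by '{top_feature}'. How might a change in '{top_feature}' alter the outcome, "
        f"and does that match your clinical assessment?"
    )

patient = {"mobility": 0.7, "pain_level": 0.5}
prediction = predict_effectiveness(patient)
question = formulate_question(prediction, feature_contributions(patient))
print(question)
```

The key design point the sketch preserves is that the question is grounded in the explanation, not only the prediction: the most influential feature is surfaced so the decision-maker is prompted to reason counterfactually about it.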