Table of Contents

AI Disclosure with DAISY

Yoana Ahmetoglu, Marios Constantinides, Anna Cox

Abstract

The use of AI tools in research is becoming routine, alongside growing consensus that such use should be transparently disclosed. However, AI disclosure statements remain rare and inconsistent, with policies offering limited guidance and authors facing social, cognitive, and emotional barriers when reporting AI use. To explore how structured disclosure shapes what authors report and how they experience disclosure, we present DAISY (Disclosure of AI-uSe in Your Research), a form-based tool for generating AI disclosure statements. DAISY was developed from literature-derived requirements and co-design (N=11), and deployed in a user study with authors (N=31). DAISY-supported disclosures met more completeness criteria, offering clearer breakdowns of AI use across research and writing than unsupported disclosures. Surprisingly, despite concerns about how transparently disclosed AI use might be perceived, the use of DAISY did not reduce author comfort with the disclosure statements. We discuss design implications and a research agenda for AI disclosure as a sociotechnical practice.

Paper Structure

This paper contains 47 sections, 7 figures, 3 tables.

Figures (7)

  • Figure 1: Speculative AI disclosure artefacts created by 11 participants (P1–P11) during four co-design sessions, sorted by participant background and layout.
  • Figure 2: The DAISY interface. The left panel shows the default, unfilled form. The right panel shows a completed form and the resulting AI disclosure statement, including the editable output reviewed and refined by the author prior to submission.
  • Figure 3: (Left) Mean character count in disclosure statements by condition. Error bars indicate ±1 SD. (Right) Mean presence (0–1) of disclosure statement completeness criteria by condition.
  • Figure 4: (Left) Mean comfort ratings (0–10) by condition (±1 SD). (Middle) Preferred disclosure approach, showing the number of participants selecting each option. (Right) Participants’ reported likelihood of using DAISY in the future, recommending DAISY to colleagues, and perceived ease of creating an AI disclosure statement with DAISY (0–10 scales; ±1 SD).
  • Figure 5: A speculative conceptual design space for AI disclosure tools. The horizontal axis contrasts author self-report with automated capture, while the vertical axis contrasts procedural compliance with reflection and transparency. The four quadrants illustrate distinct approaches to disclosure identified through our findings.
  • ...and 2 more figures