Disclosure or Marketing? Analyzing the Efficacy of Vendor Self-reports for Vetting Public-sector AI

Blaine Kuehnert, Nari Johnson, Ravit Dotan, Hoda Heidari

Abstract

Documentation-based disclosure has become a central governance strategy for responsible AI, particularly in public-sector procurement. Tools such as model cards, datasheets, and AI FactSheets are increasingly expected to support accountability, risk assessment, and informed decision-making across organizational boundaries. Yet there is limited empirical evidence about how these artifacts are produced, interpreted, and used in practice. In this paper, we present a qualitative study of the GovAI Coalition FactSheet, a widely adopted transparency document designed to support AI procurement and governance in government contexts. Drawing on semi-structured interviews with vendors and public-sector practitioners, alongside a systematic analysis of completed FactSheets, we examine how FactSheets are used, what information they surface, and where they fall short. We find that FactSheets are asked to serve multiple and conflicting purposes simultaneously: showcasing vendor offerings, supporting evaluation and due diligence, and facilitating early-stage dialogue between vendors and agencies. These competing expectations, combined with the structural constraints of voluntary and public self-disclosure, limit the ability of FactSheets to function as standalone evaluation or risk-assessment tools. At the same time, our findings suggest that when understood as relational artifacts used to establish trust, shared understanding, and ongoing dialogue, FactSheets can help create conditions that support more meaningful disclosure and governance over time.

Paper Structure

This paper contains 44 sections, 2 figures, 3 tables.

Figures (2)

  • Figure 1: Unpacking the GovAI FactSheet's role in facilitating meaningful disclosures. Our findings begin by examining what types of disclosures are currently being made, then move outward to the perspectives of key stakeholders (AI vendors and purchasing governments) and the broader ecosystem of incentives that enables or constrains meaningful disclosures about public-sector AI systems. Understanding these incentives clarifies how and why FactSheets were completed.
  • Figure 2: While vendors frequently provide meaningful information about intended use and contextual limitations, disclosures relevant to evaluation, governance, and training data remain sparse. This asymmetry underscores the mismatch between expectations that FactSheets function as evaluation tools and the realities of voluntary, public-facing documentation, and it reinforces their practical role as relational, early-stage governance artifacts.