Laminator: Verifiable ML Property Cards using Hardware-assisted Attestations

Vasisht Duddu, Oskari Järvinen, Lachlan J Gunn, N Asokan

TL;DR

Laminator introduces hardware-assisted verifiable ML property cards to address the risk of false or tampered model transparency documents. By embedding multiple attestation types inside TEEs (notably SGX) and chaining them with external certificates, it binds training data, training configurations, and per-inference properties to verifiable signatures, producing trustworthy model cards, datasheets, and inference cards. The framework demonstrates efficiency, scalability, and versatility, showing low attestation overhead across several datasets and model sizes while enabling verifiable claims about data provenance, training procedures, and inference behavior. This hardware-first approach offers a practical path toward trusted ML marketplaces and regulatory compliance without disclosing sensitive data or incurring prohibitive cryptographic costs.

Abstract

Regulations increasingly call for various assurances from machine learning (ML) model providers about their training data, training process, and model behavior. For better transparency, industry (e.g., Huggingface and Google) has adopted model cards and datasheets to describe various properties of training datasets and models. In the same vein, we introduce the notion of inference cards to describe the properties of a given inference (e.g., binding of the output to the model and its corresponding input). We coin the term ML property cards to collectively refer to these various types of cards. To prevent a malicious model provider from including false information in ML property cards, they need to be verifiable. We show how to construct verifiable ML property cards using property attestation, technical mechanisms by which a prover (e.g., a model provider) can attest to various ML properties to a verifier (e.g., an auditor). Since prior attestation mechanisms based purely on cryptography are often narrowly focused (lacking versatility) and inefficient, we need an efficient mechanism to attest different types of properties across the entire ML model pipeline. Emerging widespread support for confidential computing has made it possible to run and even train models inside hardware-assisted trusted execution environments (TEEs), which provide highly efficient attestation mechanisms. We propose Laminator, which uses TEEs to provide the first framework for verifiable ML property cards via hardware-assisted ML property attestations. Laminator is efficient in terms of overhead, scalable to large numbers of verifiers, and versatile with respect to the properties it can prove during training or inference.
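The abstract's core idea for inference cards is that the output of an inference is cryptographically bound to the model and its input, so a verifier can check the binding without trusting the provider's word. The following is a minimal sketch of that binding, not the paper's implementation: each artifact is measured with a hash, the measurements are signed, and the verifier recomputes and checks the signature. Inside a TEE the signature would come from the hardware attestation key; here an HMAC key (`TEE_KEY`, a hypothetical stand-in) plays that role.

```python
import hashlib
import hmac

# Hypothetical stand-in for the TEE's hardware attestation key.
TEE_KEY = b"stand-in-for-hardware-attestation-key"


def measure(data: bytes) -> str:
    """Measurement: a cryptographic hash identifying an artifact."""
    return hashlib.sha256(data).hexdigest()


def _digest(card: dict) -> bytes:
    """Deterministic digest over the card's measurements (not the signature)."""
    fields = "|".join(card[k] for k in sorted(card) if k != "signature")
    return hashlib.sha256(fields.encode()).digest()


def make_inference_card(model: bytes, model_input: bytes, output: bytes) -> dict:
    """Bind an output to the model and input that produced it, then sign."""
    card = {
        "model_hash": measure(model),
        "input_hash": measure(model_input),
        "output_hash": measure(output),
    }
    card["signature"] = hmac.new(TEE_KEY, _digest(card), hashlib.sha256).hexdigest()
    return card


def verify_inference_card(card: dict) -> bool:
    """Verifier recomputes the signature over the claimed measurements."""
    expected = hmac.new(TEE_KEY, _digest(card), hashlib.sha256).hexdigest()
    return hmac.compare_digest(card["signature"], expected)


card = make_inference_card(b"model-weights", b"input-image", b"label: cat")
assert verify_inference_card(card)

# Tampering with any bound field invalidates the card.
tampered = dict(card, output_hash=measure(b"label: dog"))
assert not verify_inference_card(tampered)
```

In the actual system the signing step would be replaced by the TEE's remote attestation (e.g., an SGX quote over the enclave and its measurements), so that the verifier trusts the hardware vendor's certificate chain rather than a shared key.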

Paper Structure

This paper contains 28 sections, 3 figures, 6 tables, and 7 algorithms.

Figures (3)

  • Figure 1: Overview of ML property attestations and their relation to different verifiable model cards: attestations for datasheet (orange), model card (blue), inference card (red), along with external certificates (cyan).
  • Figure 2: Overview of Laminator's design: Components that already exist or are part of the infrastructure are indicated in gray, while components specific to Laminator are indicated in orange. There can be different enclaves for different attestations, generated by changing the "measurer" (in green). Dashed lines correspond to assertions by a trusted certifier, while solid lines correspond to assertions by a TEE.
  • Figure 3: Inputs and outputs for different attestation enclaves. Laminator includes four different enclaves, indicated in orange. Attestations are shown as ellipses, outputs in blue, and inputs to enclaves in white. The measurer is indicated in green.