
Hardware-Level Governance of AI Compute: A Feasibility Taxonomy for Regulatory Compliance and Treaty Verification

Samar Ansari

Abstract

The governance of frontier AI increasingly relies on controlling access to computational resources, yet the hardware-level mechanisms invoked by policy proposals remain largely unexamined from an engineering perspective. This paper bridges the gap between AI governance and computer engineering by proposing a taxonomy of 20 hardware-level governance mechanisms, organised by function (monitoring, verification, enforcement) and assessed for technical feasibility on a four-point scale from currently deployable to speculative. For each mechanism, we provide a technical description, a feasibility rating, and an analysis of adversarial vulnerabilities. We map the taxonomy onto four governance scenarios: domestic regulation, bilateral agreements, multilateral treaty verification, and industry self-regulation. Our analysis reveals a structural mismatch: the mechanisms most needed for treaty verification, including on-chip compute metering, cryptographic proof-of-training, and hardware-embedded enforcement, are also the least mature. We assess principal threats to compute-based governance, including algorithmic efficiency gains, distributed training methods, and sovereignty concerns. We identify a temporal constraint: the window during which semiconductor manufacturing concentration makes hardware-level governance implementable is narrowing, while R&D timelines for critical mechanisms span years. We present an adversary-tiered threat analysis distinguishing commercial, non-state, and nation-state actors, arguing that the appropriate security standard is tamper-evident assurance analogous to IAEA verification rather than absolute tamper-proofing. The taxonomy, feasibility classification, and mechanism-to-scenario mapping provide a technical foundation for policymakers and identify the R&D investments required before hardware-level governance can support verifiable international agreements.

Paper Structure

This paper contains 58 sections, 5 figures, 2 tables.

Figures (5)

  • Figure 1: Overview of the 20 hardware-level governance mechanisms organised by function (monitoring, verification, enforcement). Colour indicates the primary feasibility tier from Table \ref{tab:feasibility}; grey denotes mechanisms that span multiple tiers depending on the implementation variant. Mechanism identifiers (M/V/E) are used throughout Sections \ref{sec:taxonomy}--\ref{sec:mapping}.
  • Figure 2: The readiness gap between mechanism feasibility and multilateral treaty requirements. Bar length and label indicate each mechanism's importance for the treaty verification scenario (from Table \ref{tab:mapping}); bar colour indicates feasibility tier (from Table \ref{tab:feasibility}). The structural mismatch is visible: mechanisms rated as strong fits ($++$) for treaty verification are predominantly amber (requires R&D) or red (speculative), while currently deployable mechanisms (blue) have limited treaty applicability.
  • Figure 3: Adversary-tiered threat model for hardware-embedded governance mechanisms. Each tier is characterised by its primary attack surface, the corresponding defence standard, and the scalability of attacks. The policy implication is that tamper resistance need not be absolute: it must make circumvention more expensive than compliance for Tiers 1--2 and leave detectable evidence for Tier 3.
  • Figure 4: Layered governance architecture mapping mechanisms to institutional models. Layer 1 (domestic regulation) builds on currently deployable mechanisms following the FATF model. Layer 2 (bilateral enforcement) extends governance through export controls and hardware tracking. Layer 3 (multilateral verification) requires the most technically demanding mechanisms, following the IAEA model. Layers are designed to be built sequentially, with each providing a foundation for the next.
  • Figure 5: R&D and deployment timelines for the four highest-priority mechanisms (from Section \ref{sec:feasibility-summary}) against the estimated window of semiconductor manufacturing concentration. R&D phase estimates (1.5--4 years) are drawn from Aarne et al. \cite{aarne2024secure}; a further 4-year deployment phase reflects their estimate for sufficiently widespread adoption. The pessimistic window boundary (dashed) represents a scenario where distributed training maturation and indigenous fab capacity building in restricted jurisdictions erode manufacturing leverage by approximately 2032; the optimistic boundary (dotted) extends this to approximately 2036. In either scenario, the margin between mechanism readiness and window closure is narrow, underscoring the urgency of near-term R&D investment.
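The timeline arithmetic behind Figure 5 can be made explicit with a small sketch. This is an illustrative calculation, not the paper's model: the R&D start year of 2025 is an assumption, while the 1.5--4 year R&D range, the 4-year deployment phase, and the 2032/2036 window boundaries are taken from the caption above.

```python
# Readiness-margin sketch for the Figure 5 timeline.
# Assumption: mechanism R&D begins in 2025 (not stated in the paper).
RND_START = 2025
DEPLOYMENT_YEARS = 4  # widespread-adoption phase, per Aarne et al.
WINDOW = {"pessimistic": 2032, "optimistic": 2036}  # window-closure scenarios


def readiness_year(rnd_years: float) -> float:
    """Year a mechanism reaches sufficiently widespread deployment."""
    return RND_START + rnd_years + DEPLOYMENT_YEARS


def margin(rnd_years: float, scenario: str) -> float:
    """Years of slack between readiness and window closure (negative = too late)."""
    return WINDOW[scenario] - readiness_year(rnd_years)


for rnd in (1.5, 4.0):
    for scenario in ("pessimistic", "optimistic"):
        print(f"R&D {rnd} yr, {scenario}: margin {margin(rnd, scenario):+.1f} yr")
```

Under these assumptions, a mechanism needing the full 4-year R&D phase misses the pessimistic 2032 boundary by a year (readiness in 2033), while even the fastest 1.5-year R&D track leaves only about 1.5 years of slack in that scenario, which is the narrow margin the caption describes.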