
AI Risk Categorization Decoded (AIR 2024): From Government Regulations to Corporate Policies

Yi Zeng, Kevin Klyman, Andy Zhou, Yu Yang, Minzhou Pan, Ruoxi Jia, Dawn Song, Percy Liang, Bo Li

TL;DR

AIR 2024 builds a unified AI risk taxonomy grounded in public- and private-sector policies, deriving 314 risk categories organized in a four-level hierarchy. Using bottom-up qualitative analysis, it maps risk descriptions from 24 policy documents (eight government policies and 16 company policies) into four high-level domains: System & Operational, Content Safety, Societal, and Legal & Rights. The paper analyzes private-sector policy coverage, maps public-sector regulations from the EU, US, and China, and examines cross-jurisdiction commonalities and gaps, including a case study on alignment with Chinese regulations. It argues that the policy-grounded AIR 2024 framework can support safer deployment of generative AI by enabling standardized risk assessment, cross-sector collaboration, and more coherent benchmarking and regulation.

Abstract

We present a comprehensive AI risk taxonomy derived from eight government policies from the European Union, United States, and China and 16 company policies worldwide, making a significant step towards establishing a unified language for generative AI safety evaluation. We identify 314 unique risk categories organized into a four-tiered taxonomy. At the highest level, this taxonomy encompasses System & Operational Risks, Content Safety Risks, Societal Risks, and Legal & Rights Risks. The taxonomy establishes connections between various descriptions and approaches to risk, highlighting the overlaps and discrepancies between public and private sector conceptions of risk. By providing this unified framework, we aim to advance AI safety through information sharing across sectors and the promotion of best practices in risk mitigation for generative AI models and systems.

Paper Structure

This paper contains 19 sections, 7 figures, and 8 tables.

Figures (7)

  • Figure 1: Overview of the AI risk taxonomy derived from 24 policy and regulatory documents, encompassing 314 unique risk categories. Charts on the right-hand side map to major AI regulations.
  • Figure 2: AI risks specified in EU regulations, mapped to 23 level-3 categories in AIR 2024.
  • Figure 3: High-risk and unacceptable risk categories under the EU AI Act.
  • Figure 4: Risks included in the White House AI Executive Order, mapped to 20 level-3 categories in AIR 2024.
  • Figure 5: Risks specified in Chinese regulatory efforts, mapped to 23 level-3 categories in AIR 2024.
  • ...and 2 more figures