AI Risk Categorization Decoded (AIR 2024): From Government Regulations to Corporate Policies
Yi Zeng, Kevin Klyman, Andy Zhou, Yu Yang, Minzhou Pan, Ruoxi Jia, Dawn Song, Percy Liang, Bo Li
TL;DR
AIR 2024 builds a unified AI risk taxonomy grounded in public and private policies, deriving 314 risk categories organized in a four-level hierarchy. It employs a bottom-up qualitative analysis to map risk descriptions from 24 policy documents (eight government policies and 16 company policies) into four high-level domains: System & Operational Risks, Content Safety Risks, Societal Risks, and Legal & Rights Risks. The paper analyzes private-sector policy coverage, maps public-sector regulations from the EU, US, and China, and examines cross-jurisdiction commonalities and gaps, including a case study of alignment with Chinese regulations. It argues that the policy-grounded AIR 2024 framework can support safer deployment of generative AI by enabling standardized risk assessment, cross-sector collaboration, and more coherent benchmarking and regulation.
Abstract
We present a comprehensive AI risk taxonomy derived from eight government policies from the European Union, United States, and China and 16 company policies worldwide, making a significant step towards establishing a unified language for generative AI safety evaluation. We identify 314 unique risk categories organized into a four-tiered taxonomy. At the highest level, this taxonomy encompasses System & Operational Risks, Content Safety Risks, Societal Risks, and Legal & Rights Risks. The taxonomy establishes connections between the varied descriptions of and approaches to risk, highlighting the overlaps and discrepancies between public and private sector conceptions of risk. By providing this unified framework, we aim to advance AI safety through information sharing across sectors and the promotion of best practices in risk mitigation for generative AI models and systems.
