
AI Agents Under EU Law

Luca Nannini, Adam Leon Smith, Michele Joshua Maggini, Enrico Panai, Sandra Feliciano, Aleksandr Tiulkanov, Elena Maran, James Gealy, Piercosma Bisconti

Abstract

AI agents, i.e. AI systems that autonomously plan, invoke external tools, and execute multi-step action chains with reduced human involvement, are being deployed at scale across enterprise functions ranging from customer service and recruitment to clinical decision support and critical infrastructure management. The EU AI Act (Regulation 2024/1689) regulates these systems through a risk-based framework, but it does not operate in isolation: providers face simultaneous obligations under the GDPR, the Cyber Resilience Act, the Digital Services Act, the Data Act, the Data Governance Act, sector-specific legislation, the NIS2 Directive, and the revised Product Liability Directive. This paper provides the first systematic regulatory mapping for AI agent providers, integrating (a) draft harmonised standards under Standardisation Request M/613 to CEN/CENELEC JTC 21 as of January 2026, (b) the GPAI Code of Practice published in July 2025, (c) the CRA harmonised standards programme under Mandate M/606 accepted in April 2025, and (d) the Digital Omnibus proposals of November 2025. We present a practical taxonomy of nine agent deployment categories mapping concrete actions to regulatory triggers, and identify agent-specific compliance challenges in cybersecurity, human oversight, transparency across multi-party action chains, and runtime behavioral drift. We propose a twelve-step compliance architecture and a regulatory trigger mapping connecting agent actions to applicable legislation. We conclude that high-risk agentic systems with untraceable behavioral drift cannot currently satisfy the AI Act's essential requirements, and that the provider's foundational compliance task is an exhaustive inventory of the agent's external actions, data flows, connected systems, and affected persons.

Paper Structure

This paper contains 54 sections, 5 figures, 1 table.

Figures (5)

  • Figure 1: Non-exhaustive taxonomy of AI Agent Use Cases and Actions, detailing the concrete tasks performed across different domains using a shared LLM-based architecture.
  • Figure 2: Multi-layer regulatory architecture for AI agent providers. The agent's external actions (top) determine which obligations are activated. The provider's primary compliance scope (top three layers) flows through the AI Act essential requirements, the harmonised standards, and the GPAI model layer. Adjacent EU instruments (bottom six boxes) are triggered by the agent's external effects: which data is processed during training or at inference time (GDPR), which products the agent interfaces with (CRA, Data Act), where it publishes (DSA), which sector it operates in (NIS2, MDR, MiFID II), and what harm its outputs may cause (PLD). Dashed bi-directional arrows indicate interaction between adjacent instruments.
  • Figure 3: Operational mapping of agent-specific characteristics to amplified compliance challenges and the structural solutions required by the AI Act's essential requirements as operationalised through the harmonised standards discussed in Section 6. The figure pairs each 'Agent Characteristic' with its 'Amplified Compliance Challenge' and the corresponding 'Operational Solution.' The diagram reinforces that cybersecurity, human oversight, and behavioral drift require enforcement mechanisms located outside the model inference process (API level).
  • Figure 4: The Multi-Layer Compliance Architecture for AI under EU Law. The diagram illustrates that the AI Act is only one layer in a complex, intersecting regulatory ecosystem, where horizontal frameworks (GDPR, Data Act, CRA) and sectoral rules must be applied simultaneously based on the agent's specific context.
  • Figure 5: Twelve-step compliance sequence for AI agent providers (Section 8.1). Three decision nodes determine the compliance pathway: Step 0 gates AI Act applicability (non-AI systems face only adjacent legislation); Step 2 determines whether full Chapter III essential requirements or Article 50 transparency obligations apply; Step 8 determines whether CRA product cybersecurity runs in parallel. The dotted feedback loop from Step 11 to Step 4 reflects the post-market monitoring obligation: when behavioral drift is detected, the risk management process must reassess whether the change constitutes a substantial modification under Article 3(23). Agent-specific considerations are annotated at each step: system-counting (Step 0), GPAI compute threshold (Step 1), foreseeable-misuse analysis (Step 2), fundamental rights competence (Step 4), automation boundary design (Step 6), privilege enforcement outside the model (Step 7), and external-action inventory (Step 9).
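The trigger logic summarised in Figure 2, where an agent's external actions determine which adjacent instruments are activated, can be sketched as a simple lookup. This is an illustrative sketch only: the action labels and the `applicable_instruments` helper below are hypothetical simplifications of the paper's mapping, not a legal classification.

```python
# Toy mapping from an agent's external actions to the adjacent EU instruments
# they may trigger, loosely following the Figure 2 layers. Action labels are
# hypothetical; the real mapping in the paper is richer and sector-dependent.
REGULATORY_TRIGGERS = {
    "processes_personal_data": ["GDPR"],
    "interfaces_with_connected_product": ["Cyber Resilience Act", "Data Act"],
    "publishes_on_online_platform": ["Digital Services Act"],
    "operates_in_critical_sector": ["NIS2 Directive"],
    "output_may_cause_harm": ["Product Liability Directive"],
}

def applicable_instruments(actions):
    """Collect, without duplicates, the instruments triggered by a list of actions."""
    triggered = []
    for action in actions:
        for instrument in REGULATORY_TRIGGERS.get(action, []):
            if instrument not in triggered:
                triggered.append(instrument)
    return triggered

# Example: a customer-service agent that handles personal data and posts
# replies on an online platform would, under this toy mapping, trigger the
# GDPR and the Digital Services Act in addition to the AI Act itself.
print(applicable_instruments(["processes_personal_data",
                              "publishes_on_online_platform"]))
```

The point of the sketch is structural rather than legal: the inventory of external actions (the paper's foundational compliance task) is the input from which every downstream obligation set is derived.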