
EU AI Act Compliance for AI Agents

The EU AI Act is the world's first comprehensive, binding AI regulation. If your agentic AI systems operate in the EU or serve users there, you need to understand risk classification, mandatory requirements, and how runtime authorization maps to specific Articles.

Last updated: April 2026

What is the EU AI Act?

The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is a risk-based regulatory framework that classifies AI systems into four tiers—unacceptable, high, limited, and minimal risk—and imposes proportional obligations on providers and deployers. It entered into force on August 1, 2024, with phased enforcement beginning February 2025. AI agents that take autonomous actions (tool calls, API interactions, data processing) are subject to the Act when they are placed on the market or put into service in the EU, or when the output they produce is used in the EU.

Risk classification for AI agents

Article 6 of the EU AI Act, read together with Annex III, establishes the rules for classifying systems as high-risk. Most AI agents deployed in enterprise settings fall into the "high-risk" category due to their autonomous decision-making capabilities and the domains they operate in.

Unacceptable risk (prohibited)

AI systems that manipulate human behavior, exploit the vulnerabilities of specific groups, or perform real-time remote biometric identification in publicly accessible spaces (subject to narrow law-enforcement exceptions). AI agents that perform social scoring or subliminal manipulation are banned outright.

High risk (Art. 6 — most enterprise AI agents)

AI systems used in critical infrastructure, education, employment, essential services, law enforcement, or migration management. Most enterprise AI agents qualify because they make or influence decisions about access to services, financial transactions, employment processes, or healthcare delivery.

High-risk classification triggers the heaviest obligations: risk management systems, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity requirements (Articles 9-15).

Limited risk (transparency obligations)

AI systems that interact with humans (chatbots), generate synthetic content (deepfakes), or perform emotion recognition. Must disclose AI involvement to users. Many customer-facing AI agents fall here when they don't make high-stakes decisions.

Minimal risk (no specific obligations)

AI systems with minimal societal impact. Spam filters, AI-powered games, inventory management. No mandatory requirements beyond general product safety laws.

Article-by-Article requirements and how Veto maps to them

For high-risk AI systems, the EU AI Act mandates specific controls. Here's how each requirement applies to AI agents, and how Veto's runtime authorization satisfies the obligation.

Article 9 — Risk management system

Providers of high-risk AI systems must establish, implement, document, and maintain a risk management system throughout the AI system's lifecycle. The system must identify and analyze known and reasonably foreseeable risks, estimate and evaluate risks, and adopt suitable risk management measures.

How Veto satisfies Art. 9

  • Risk identification: Policy-as-code requires you to enumerate every tool and define allowed/denied actions — this is risk identification by construction
  • Risk estimation: Decision logs quantify risk exposure — how often agents attempt blocked actions, approval rates, escalation frequency
  • Risk management measures: Runtime authorization is itself a risk management measure — it prevents identified risks from materializing
  • Lifecycle coverage: Policies evolve with version control, environment scoping (dev/staging/prod), and audit trails for every change
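A policy-as-code file of this kind might look like the sketch below. The schema — tool names, `effect` values, `conditions` keys — is illustrative, not Veto's actual policy format:

```yaml
# Illustrative policy sketch (hypothetical schema, not Veto's real one).
# Enumerating every tool and its permitted actions is risk
# identification by construction (Art. 9).
version: 1
environment: prod
tools:
  - name: send_email
    effect: require_approval        # human review before execution
    conditions:
      recipient_domain: ["example.com"]   # internal recipients only
  - name: read_customer_record
    effect: allow
    conditions:
      max_records_per_call: 50      # bound data-exposure risk
  - name: delete_database
    effect: deny                    # identified risk, blocked outright
default: deny                       # anything not enumerated is refused
```

The `default: deny` stance means every new tool an agent gains must be explicitly risk-assessed before it can be used — which is the lifecycle coverage Article 9 asks for.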

Article 14 — Human oversight

High-risk AI systems must be designed to be effectively overseen by natural persons during the period of use. Oversight measures must enable the individual to fully understand the system's capacities and limitations, to monitor operation, and to intervene or interrupt the system.

How Veto satisfies Art. 14

  • Human-in-the-loop: Approval workflows route sensitive actions to human reviewers before execution — direct human oversight at the action level
  • Real-time monitoring: Dashboard shows live agent activity, pending approvals, and recent decisions — full operational visibility
  • Intervention capability: Policies can be updated in real-time to restrict or halt agent behavior. No redeployment needed.
  • Comprehension: Declarative YAML policies are human-readable — auditors and overseers can understand exactly what an agent is permitted to do
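An approval-workflow rule of the kind described above could be sketched as follows. The field names (`match`, `reviewers`, `on_timeout`) are hypothetical, chosen to show the shape of the control rather than Veto's actual configuration:

```yaml
# Illustrative approval rule (hypothetical schema): sensitive actions
# pause until a named human reviewer approves or rejects them,
# providing the intervention capability Art. 14 requires.
approval_rules:
  - match:
      tool: issue_refund
      args:
        amount: "> 500"             # only high-value refunds escalate
    reviewers: ["finance-oncall"]   # role that receives the request
    timeout: 15m                    # unanswered requests expire
    on_timeout: deny                # fail closed, never fail open
```

Failing closed on timeout matters here: an unattended request should never execute by default, or the "oversight" becomes a formality.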

Article 26 — Obligations of deployers

Deployers of high-risk AI systems must implement appropriate technical and organizational measures to ensure they use such systems in accordance with the instructions of use, monitor operation, keep logs generated by the system, and ensure human oversight by persons with necessary competence, training, and authority.

How Veto satisfies Art. 26

  • Instructions of use: Policies define the permitted envelope of agent behavior — this is the deployer's implementation of the provider's instructions
  • Log retention: Every authorization decision is logged with tool, arguments, policy, outcome, and timestamp. Exportable for regulatory review.
  • Monitoring: Real-time dashboard and alerting on policy violations, unusual patterns, and escalation events
  • Competent oversight: Role-based access to the Veto dashboard ensures only authorized personnel can modify policies or approve actions
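A single decision-log entry might look like the sketch below. The field names are hypothetical, but the content — tool, arguments, policy version, outcome, timestamp — mirrors what the bullets above describe:

```yaml
# Illustrative decision-log entry (hypothetical fields): one record per
# authorization decision, retained and exportable for regulatory
# review (Arts. 12 and 26).
decision:
  timestamp: "2026-03-14T09:21:07Z"
  agent_id: claims-agent-7
  tool: issue_refund
  args:
    claim_id: C-10442
    amount: 812.50
  policy: prod/finance.yaml@4f2c1a9   # version pin ties the decision
                                      # to the exact policy in force
  outcome: escalated                  # allowed | denied | escalated
  approver: jane.doe                  # present only for escalated actions
```

Pinning each record to a policy version is what lets an auditor reconstruct, months later, not just what the agent did but what it was permitted to do at that moment.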

Article 50 — Transparency obligations

Providers must ensure that AI systems intended to interact with natural persons are designed and developed so that the natural person is informed they are interacting with an AI system, unless this is obvious from the context.

How Veto supports Art. 50

  • Audit trails: Decision logs prove exactly what the AI agent did, when, and under what policy — enabling transparent reporting to affected individuals
  • Policy documentation: Version-controlled YAML policies serve as documentation of the AI system's intended behavior and constraints

The AI Pact and early compliance

The European Commission opened the AI Pact's call for interest in November 2023, inviting organizations to voluntarily commit to applying the AI Act's principles before legal enforcement deadlines; over 100 companies signed the initial pledges in September 2024. For organizations deploying AI agents, joining the AI Pact or aligning with its commitments signals regulatory readiness and builds trust with EU customers and regulators.

Veto enables AI Pact compliance by providing the technical controls that back the voluntary commitments: risk management, human oversight, transparency, and record-keeping. Policies can be implemented immediately without waiting for full enforcement timelines.

Enforcement timeline

August 1, 2024: AI Act enters into force
February 2, 2025: Prohibitions on unacceptable-risk AI systems apply
August 2, 2025: Obligations for general-purpose AI models apply; governance structures must be established
August 2, 2026: Full enforcement, including high-risk AI obligations, deployer obligations, and penalties
August 2, 2027: Extended deadline for high-risk AI systems embedded in Annex I regulated products

Penalties for non-compliance

The EU AI Act imposes significant fines for non-compliance, scaled by company revenue:

  • €35M / 7% — up to €35 million or 7% of global annual turnover for prohibited AI practices
  • €15M / 3% — up to €15 million or 3% of global annual turnover for non-compliance with high-risk and other obligations
  • €7.5M / 1% — up to €7.5 million or 1% of global annual turnover for supplying incorrect, incomplete, or misleading information

Frequently asked questions

Are AI agents considered high-risk under the EU AI Act?
Most enterprise AI agents that make or influence decisions in areas like finance, healthcare, employment, or essential services qualify as high-risk under Article 6 and Annex III. The key factor is whether the agent's actions affect access to services, financial outcomes, or individual rights. If your agent processes transactions, manages claims, or handles personal data, it's likely high-risk.
When does the EU AI Act apply to companies outside the EU?
The AI Act applies to any provider or deployer of AI systems that are placed on the market or put into service in the EU, regardless of where the provider is established. It also applies when the output produced by the AI system is used in the EU. If your AI agents serve EU customers or process EU citizen data, you're in scope.
What does Article 14 (human oversight) require for AI agents?
Article 14 requires that high-risk AI systems can be effectively overseen by natural persons during operation. For AI agents, this means humans must be able to monitor what the agent is doing, understand its decisions, intervene to stop or modify behavior, and override automated decisions. Veto's approval workflows and real-time dashboard directly satisfy these requirements.
How do I document compliance for the EU AI Act?
The AI Act requires technical documentation (Art. 11), record-keeping (Art. 12), and quality management systems (Art. 17). Veto provides version-controlled policies (documentation), immutable decision logs (record-keeping), and policy testing and monitoring (quality management). Policies and decision logs can be exported in formats structured for regulatory review.
What is the AI Pact and should I sign it?
The AI Pact is a voluntary commitment framework from the European Commission that invites organizations to apply AI Act principles before enforcement deadlines. Signing demonstrates regulatory readiness and good faith. If you're already deploying AI agents with runtime authorization, you likely meet the Pact's commitments and signing is a low-cost signal of compliance maturity.

Related compliance resources

Full enforcement begins August 2026. Start building compliance now.