AI Governance Framework: EU AI Act Compliance

Reviewed by Opsio Engineering Team
Fredrik Karlsson

Group COO & CISO

Operational excellence, governance, and information security. Aligns technology, risk, and business outcomes in complex IT environments.

The EU AI Act became law in August 2024 and represents the world's first comprehensive legal framework for artificial intelligence. High-risk AI requirements apply from August 2026, with prohibited practice bans already in force from February 2025. The European Commission estimates that approximately 5-15% of all AI systems deployed in the EU will fall into the high-risk category, requiring conformity assessments, technical documentation, and ongoing post-market surveillance. For enterprises with multiple AI systems deployed or in development, building a systematic governance framework is not optional. It's the mechanism that makes compliance operationally manageable.

Key Takeaways

  • EU AI Act prohibited practices are in force from February 2025; high-risk requirements apply from August 2026 (European Commission, 2024).
  • 5-15% of enterprise AI systems are estimated to fall into the high-risk category requiring full conformity assessment, technical documentation, and post-market surveillance.
  • General Purpose AI (GPAI) models with systemic risk, defined as trained on more than 10^25 FLOPs, face additional transparency and safety requirements under the Act.
  • An enterprise AI governance framework requires four layers: policy, roles and accountability, operational controls, and audit and reporting mechanisms.
  • The governance framework must be operational before the first high-risk AI deployment, not built in response to a compliance incident.

What Is the EU AI Act and Who Does It Apply To?

The EU AI Act is a product safety and fundamental rights regulation that applies to any AI system placed on the EU market or used within the EU, regardless of where the provider is established. It follows GDPR's effects-based jurisdiction approach: a US or UK company selling or deploying AI systems that affect EU citizens or operate within EU territory must comply. The Act imposes different obligations on providers (organizations that develop and place AI systems on the market) and deployers (organizations that use AI systems in their operations), though both carry significant responsibilities.

The Act's definition of an AI system is broad: "a machine-based system designed to operate with varying levels of autonomy that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence real or virtual environments." This definition captures not only modern machine learning models but also certain rule-based systems that exhibit adaptive behaviour.

Non-compliance penalties under the EU AI Act scale by infringement type. Prohibited practice violations carry fines up to 35 million euros or 7% of global annual turnover. High-risk requirement violations carry fines up to 15 million euros or 3% of global turnover. Providing incorrect information to authorities carries fines up to 7.5 million euros or 1.5% of global turnover. These penalty levels are comparable to GDPR enforcement and will be administered by national market surveillance authorities coordinated at EU level through the AI Office.
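
Because the caps apply as "whichever is higher" for undertakings, the percentage term dominates for any large enterprise (above roughly 500 million euros in turnover for the top tier). A minimal sketch of the tier structure in Python; the turnover figure in the example is hypothetical:

```python
# Maximum fine caps by infringement tier. For undertakings the cap is
# whichever is higher of the fixed amount and the share of global annual
# turnover. Amounts in euros.
PENALTY_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_violation": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
}

def max_fine(tier: str, global_turnover: float) -> float:
    fixed, pct = PENALTY_TIERS[tier]
    return max(fixed, pct * global_turnover)

# Hypothetical company with 2 billion euros in global annual turnover:
print(f"{max_fine('prohibited_practice', 2e9):,.0f}")  # 140,000,000
```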

How Does EU AI Act Risk Classification Work?

The EU AI Act uses a four-tier risk pyramid to classify AI systems, with obligations proportionate to risk level. The classification framework is not self-executing. Organizations must actively assess where each of their AI systems falls within the framework and document that assessment. A 2024 Deloitte analysis found that 45% of organizations with EU AI exposure have not yet started risk classification for their AI portfolios, despite prohibited practice deadlines already passing.

Unacceptable Risk: Prohibited AI Practices

The Act prohibits eight categories of AI practice outright, with no compliance pathway. These include: subliminal manipulation that bypasses conscious decision-making; exploitation of vulnerabilities of specific groups (age, disability); real-time remote biometric identification in publicly accessible spaces by law enforcement (except specific exemptions); biometric categorization inferring sensitive attributes like race, political opinions, or sexual orientation; social scoring by public or private actors; predictive policing based solely on profiling; untargeted scraping of biometric data for facial recognition databases; and AI-inferred emotion recognition in workplaces or educational institutions.
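
For deployers, one practical way to operationalize this list is a yes/no screening questionnaire applied during procurement and internal review. A minimal sketch, assuming a Python-based workflow; the category identifiers and question wording are our own paraphrases, not the Act's legal text:

```python
# The eight prohibited categories, paraphrased from Article 5 and keyed
# for a yes/no screening questionnaire. Identifiers and wording are ours.
PROHIBITED_PRACTICES = {
    "subliminal_manipulation": "Influences behaviour below conscious awareness?",
    "vulnerability_exploitation": "Exploits age, disability, or social situation?",
    "realtime_biometric_id": "Real-time remote biometric ID in public spaces?",
    "biometric_categorization": "Infers sensitive attributes from biometric data?",
    "social_scoring": "Scores people with detrimental cross-context effects?",
    "predictive_policing": "Predicts offending based solely on profiling?",
    "biometric_scraping": "Untargeted scraping of biometric data for databases?",
    "emotion_recognition": "Infers emotions in workplaces or schools?",
}

def screen(answers: dict[str, bool]) -> list[str]:
    """Return flagged categories. Any flag means there is no compliance
    pathway: the system cannot be deployed in the EU."""
    return [cat for cat in PROHIBITED_PRACTICES if answers.get(cat, False)]

# Example: an HR tool whose engagement monitoring infers employee mood.
print(screen({"emotion_recognition": True}))  # ['emotion_recognition']
```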

The prohibition on emotion recognition AI in workplaces is particularly significant for enterprise deployers. Many HR technology platforms incorporate sentiment analysis or engagement monitoring tools that could fall within this prohibition. Legal review of existing HR AI tools against this prohibition should be completed urgently: the ban has been in force since February 2025.

High-Risk AI: The Compliance-Intensive Category

High-risk AI systems are defined in Annex I (AI used as safety components in regulated products) and Annex III (standalone high-risk AI applications). Annex III covers eight areas: biometric identification and categorization; critical infrastructure management; education and vocational training; employment and worker management; access to essential private or public services; law enforcement; migration, asylum, and border control; and administration of justice. Any AI system falling within these categories requires full high-risk compliance procedures before deployment.

High-risk requirements are substantial. They include: a risk management system maintained throughout the lifecycle; data governance procedures ensuring training data is relevant, representative, and free from prohibited biases; technical documentation completed before market placement; automatic logging of operations (audit trail); transparency to deployers sufficient to enable correct use; human oversight measures enabling intervention or override; and accuracy, robustness, and cybersecurity guarantees appropriate to the use case.
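
Each of these requirements implies engineering work, not just documentation. The automatic-logging obligation, for example, translates into an inference-level audit trail. A minimal sketch of what that might look like in a Python service; the Act does not prescribe a log schema, so the fields, file format, and the `score_applicant` example are our own illustration:

```python
import functools
import json
import time
import uuid

AUDIT_LOG = "ai_audit_log.jsonl"  # append-only JSON Lines file (illustrative)

def audited(model_id: str, model_version: str):
    """Record every inference call of a high-risk model to an audit trail.

    The Act requires automatic logging but does not prescribe a schema;
    these fields are our own illustration.
    """
    def decorator(predict):
        @functools.wraps(predict)
        def wrapper(*args, **kwargs):
            record = {
                "event_id": str(uuid.uuid4()),
                "timestamp": time.time(),
                "model_id": model_id,
                "model_version": model_version,
            }
            try:
                result = predict(*args, **kwargs)
                record["outcome"] = "ok"
                return result
            except Exception as exc:
                record["outcome"] = f"error: {exc}"
                raise
            finally:
                with open(AUDIT_LOG, "a") as f:
                    f.write(json.dumps(record) + "\n")
        return wrapper
    return decorator

@audited(model_id="credit-scoring", model_version="2.3.1")
def score_applicant(features: dict) -> float:
    return 0.5  # placeholder for real model inference

score_applicant({"income": 52000})  # appends one audit record
```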

Limited and Minimal Risk: Lighter-Touch Requirements

Limited-risk AI systems (primarily chatbots and AI-generated content) face transparency obligations only: users must be informed they are interacting with an AI system. This requirement applies to any AI system intended to interact with natural persons. Minimal-risk AI systems (spam filters, AI in video games, recommendation systems without significant effects on individuals) have no mandatory requirements under the Act, though providers are encouraged to adopt voluntary codes of conduct.

General Purpose AI Models: Special Rules for Foundation Models

The EU AI Act introduced a specific category for General Purpose AI (GPAI) models, defined as AI models trained on broad data at large scale that can serve multiple purposes. All GPAI providers must maintain technical documentation, comply with EU copyright law, and publish training data summaries. GPAI models with systemic risk, defined as trained on compute exceeding 10^25 FLOPs (currently capturing models like GPT-4 class systems), face additional requirements including adversarial testing, incident reporting to the AI Office, and cybersecurity protections. The $100 million Anthropic Claude Partner Network investment reflects how major GPAI providers are building compliance infrastructure for this tier.
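
To get a feel for the 10^25 FLOPs threshold, a rough estimate can use the common heuristic that training compute is approximately 6 × parameters × training tokens. The heuristic comes from the scaling-law literature, not from the Act itself, and the model in the example is hypothetical:

```python
SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs threshold for GPAI systemic risk

def training_flops(params: float, tokens: float) -> float:
    # Approximation from the scaling-law literature, not defined in the Act.
    return 6 * params * tokens

# Hypothetical: a 70B-parameter model trained on 2 trillion tokens.
flops = training_flops(70e9, 2e12)
print(f"{flops:.2e} FLOPs -> systemic risk: {flops > SYSTEMIC_RISK_THRESHOLD}")
# 8.40e+23 FLOPs -> systemic risk: False
```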

Organizations frequently misclassify their AI systems as minimal risk because they see no direct user interaction. Risk classification under the EU AI Act is about potential impact on people, not the sophistication of the user interface. An automated credit scoring model that runs invisibly in the background but determines loan eligibility is a high-risk system even though users never see the AI output directly.
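
A governance inventory can enforce this by keying first-pass classification on the system's area of impact rather than its interface. A minimal triage sketch with our own paraphrased Annex III area names; formal classification still requires legal review:

```python
# Paraphrased Annex III areas; a match means high-risk regardless of
# whether end users ever see the AI's output directly.
ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

def triage(area_of_impact: str, user_facing: bool) -> str:
    """First-pass triage only; formal classification needs legal review."""
    if area_of_impact in ANNEX_III_AREAS:
        return "high-risk"
    return "limited-risk (transparency duties)" if user_facing else "minimal-risk (verify)"

# The invisible credit-scoring model: access to essential private services.
print(triage("essential_services", user_facing=False))  # high-risk
```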

What Does Conformity Assessment Require?

Conformity assessment is the process by which providers demonstrate that a high-risk AI system meets the Act's requirements before placing it on the market. For most high-risk AI systems, self-assessment by the provider is permitted, producing an EU Declaration of Conformity. Third-party Notified Body involvement is required for AI systems that are safety components in regulated products already subject to third-party conformity assessment under sectoral legislation (medical devices, machinery, vehicles).

The self-assessment route involves working through a checklist of technical requirements and documenting evidence of compliance. This is not a paper exercise. Each requirement demands engineering work: the risk management system must be implemented and maintained, not just described; the automatic logging system must be deployed and producing audit trails; the human oversight mechanisms must be built into the user interface and tested. Creating documentation that describes these systems without actually building them constitutes non-compliance.
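
One way to keep self-assessment honest is to make the checklist machine-readable and require every requirement to point at evidence that actually exists in the repository. A sketch under assumed conventions; the article references are indicative and the evidence paths are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """One AI Act requirement with pointers to engineering evidence."""
    article: str       # indicative article reference
    description: str
    evidence: list[str] = field(default_factory=list)  # hypothetical repo paths

    @property
    def satisfied(self) -> bool:
        return bool(self.evidence)

CHECKLIST = [
    Requirement("Art. 9", "Risk management system", ["docs/risk-register.md"]),
    Requirement("Art. 10", "Data and data governance", ["docs/data-lineage.md"]),
    Requirement("Art. 12", "Automatic logging", []),  # not yet evidenced
    Requirement("Art. 14", "Human oversight", ["ui/override-flow.md"]),
]

for r in CHECKLIST:
    if not r.satisfied:
        print(f"GAP: {r.article} - {r.description}")  # GAP: Art. 12 - Automatic logging
```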

Post-conformity assessment obligations continue throughout the system's lifecycle. Providers must report serious incidents involving high-risk AI to national market surveillance authorities. They must update their conformity assessment when making substantial modifications to the system. They must maintain a quality management system. And they must register their high-risk AI system in the EU database maintained by the AI Office before deployment. The database is publicly accessible, making non-registration visible to regulators and competitors alike.

Documentation Requirements for High-Risk AI

Technical documentation for high-risk AI is the most time-intensive compliance element. According to ENISA's 2024 AI compliance guidance, comprehensive technical documentation for a single high-risk AI system typically requires 200-500 pages of structured content across multiple categories. This is not unusual in regulated industries. Medical device technical files and financial model documentation run to similar lengths. The challenge is creating processes to produce and maintain this documentation efficiently across an AI portfolio.

Required documentation categories include: a general description of the AI system and its intended purpose; a description of the elements of the AI system and the process for its development; information on monitoring, functioning, and control of the AI system; a description of the risk management system; data governance and data management documentation; post-market monitoring information; and where applicable, standards applied and technical solutions to comply with those standards. The documentation must be kept up to date throughout the system's commercial lifetime and for 10 years after market withdrawal.
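
Keeping this documentation current across a portfolio is easier when the categories are tracked as a living manifest with review dates rather than a static file. A minimal sketch; the 180-day review interval is an internal policy choice invented for illustration, not a requirement of the Act:

```python
from datetime import date, timedelta

# 180-day review interval is an internal policy choice, not from the Act.
REVIEW_INTERVAL = timedelta(days=180)

manifest = {  # category -> date of last review (illustrative dates)
    "general_description": date(2025, 3, 1),
    "development_process": date(2025, 3, 1),
    "monitoring_and_control": date(2024, 6, 15),
    "risk_management_system": date(2025, 1, 10),
    "data_governance": date(2024, 5, 2),
    "post_market_monitoring": date(2025, 2, 20),
}

today = date(2025, 6, 1)  # fixed for a reproducible example
for category, last_reviewed in manifest.items():
    if today - last_reviewed > REVIEW_INTERVAL:
        print(f"STALE: {category} last reviewed {last_reviewed}")
```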

In EU AI Act compliance assessments, we've found that organizations with existing ISO 9001 quality management systems and ISO/IEC 27001 information security management systems have 40-50% less documentation work than organizations without these foundations. The EU AI Act's quality management system requirements align closely with ISO 9001 process documentation requirements. Leveraging existing management system infrastructure significantly reduces compliance effort.

The EU's AI Office has published standardized templates for key documentation elements, and harmonized standards under the AI Act are being developed by CEN-CENELEC. Organizations should monitor the AI standards development program and plan to adopt harmonized standards once published, as compliance with harmonized standards creates a presumption of conformity that simplifies assessment and audit.

Building an Enterprise AI Governance Framework

An enterprise AI governance framework is the organizational infrastructure that makes AI Act compliance operationally sustainable rather than a one-time project per system. Without a framework, each AI system requires its own ad hoc compliance effort, duplicating work and creating inconsistent documentation quality. With a framework, compliance requirements become embedded in standard development and deployment workflows, dramatically reducing marginal compliance cost per system.

Governance Roles and Accountability Structures

Effective AI governance requires three distinct role types:

  • Strategic oversight: typically an AI Ethics Board or AI Governance Committee composed of senior leaders from legal, technology, HR, risk, and business units. This body sets AI policy, approves high-risk AI deployments, and reviews AI incidents.
  • Operational management: an AI Program Office or equivalent function that maintains the AI portfolio inventory, coordinates compliance activities, and manages the AI documentation library.
  • Technical execution: AI risk management embedded within development teams through roles like AI Safety Engineers or ML Governance Leads who integrate compliance requirements into delivery workflows.

The AI Policy Layer

The AI policy layer defines the organizational rules within which AI is developed and deployed. It must cover:

  • Acceptable use policy: what AI can be used for and what is prohibited.
  • Data governance policy for AI: sourcing, quality, retention, and deletion.
  • Model development standards: documentation requirements, testing standards, and approval gates.
  • Third-party AI procurement policy: vendor obligations and contract requirements.
  • Incident management policy: classification, reporting, and response.

Policies should be concise, actionable, and directly reference EU AI Act requirements where applicable to create a clear traceability chain for auditors.
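
The traceability chain itself can be made machine-checkable by mapping each policy clause to the provisions it implements. A sketch with hypothetical clause identifiers; verify the article references against the final text of the Act:

```python
# Each internal policy clause points at the Act provisions it implements,
# so an auditor can walk from requirement to policy and back. Clause IDs
# are hypothetical; article references are indicative.
TRACEABILITY = {
    "AUP-01 acceptable use": ["Art. 5 prohibited practices"],
    "DG-03 training data quality": ["Art. 10 data and data governance"],
    "MDS-02 documentation gates": ["Art. 11 technical documentation"],
    "TPP-01 vendor obligations": ["Art. 25 value-chain responsibilities"],
    "IM-01 incident reporting": ["Art. 73 serious incident reporting"],
}

def articles_without_policy(required: list[str]) -> list[str]:
    covered = {a for articles in TRACEABILITY.values() for a in articles}
    return [a for a in required if a not in covered]

print(articles_without_policy(
    ["Art. 5 prohibited practices", "Art. 14 human oversight"]
))  # ['Art. 14 human oversight']
```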

Operational Controls and Model Risk Management

Operational controls are the concrete checks and approvals that ensure policies are followed in practice. A model risk management framework covers the complete model lifecycle: development controls (code review standards, data quality checks), validation controls (performance testing, bias auditing, security review), deployment controls (staging environment sign-off, monitoring setup verification, rollback procedure test), and ongoing controls (drift monitoring thresholds, scheduled revalidation, incident triggers). Each control has an owner, an execution record, and a cadence.
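
That owner-record-cadence triple maps naturally onto a small control registry that can be checked automatically in CI or a scheduled job. A minimal sketch with hypothetical control names and owning teams:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Control:
    """One lifecycle control: owner, cadence, and last execution record."""
    name: str
    stage: str                 # development | validation | deployment | ongoing
    owner: str
    cadence_days: int
    last_executed: date | None = None

    def overdue(self, today: date) -> bool:
        if self.last_executed is None:
            return True  # never executed counts as overdue
        return (today - self.last_executed).days > self.cadence_days

REGISTRY = [  # hypothetical controls and owning teams
    Control("bias audit", "validation", "ml-governance", 90, date(2025, 1, 15)),
    Control("drift monitoring review", "ongoing", "platform-team", 30),
]

for c in REGISTRY:
    if c.overdue(date(2025, 5, 1)):
        print(f"OVERDUE: {c.name} (owner: {c.owner})")  # both controls print
```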

The most effective governance frameworks we've seen are not the most comprehensive ones. They're the most operational ones: governance where every requirement maps to a specific step in the development or deployment workflow with a named owner and a traceable record. Governance that lives in a separate document nobody reads during project execution is decorative. Governance embedded in JIRA ticket types and deployment pipelines is real.

Frequently Asked Questions

Does the EU AI Act apply to AI systems already deployed before August 2026?

Yes, with transition provisions. AI systems already placed on the market before the high-risk requirements apply have a transition period until August 2027 to achieve compliance, provided the system is not subject to significant changes. Systems placed on the market after August 2026 must comply before deployment. Providers should use the transition period to assess their existing portfolio and bring high-risk systems into compliance, rather than waiting for the 2027 deadline when regulatory enforcement ramps up fully.

How are high-risk AI systems registered in the EU database?

Providers of high-risk AI systems listed in Annex III must register in the EU AI Act database managed by the AI Office before placing the system on the market. Registration requires providing system description, intended purpose, provider identity, contact details, and conformity assessment status. The database is publicly accessible, enabling transparency and regulatory oversight. Certain deployers of Annex III high-risk AI, including those in critical infrastructure, law enforcement, and migration management, must also register their use. The registration portal has been available in beta form since early 2026.

What is the difference between an AI provider and an AI deployer under the Act?

A provider is an organization that develops an AI system and places it on the market under their own name or trademark, or has an AI system specifically developed for their own use. A deployer is an organization that uses an AI system under its authority for professional purposes. Providers bear primary compliance obligations including conformity assessment and technical documentation. Deployers bear obligations around appropriate use within the intended purpose, data management for training, and human oversight implementation. Many organizations are both provider and deployer for different AI systems in their portfolio.

Can an organization use GDPR compliance infrastructure for EU AI Act compliance?

Partially. Data governance documentation, Data Protection Impact Assessments, and records of processing activities required under GDPR provide useful foundations for EU AI Act data governance documentation. However, the Act's technical documentation, risk management system, post-market monitoring, and conformity assessment requirements go significantly beyond GDPR. Organizations with mature GDPR programs have a head start but must supplement significantly. A gap analysis comparing existing GDPR program outputs against EU AI Act Articles 9-17 requirements is the right starting point, typically taking 2-4 weeks for an experienced compliance team.

Conclusion

The EU AI Act creates a significant new compliance obligation for any organization developing or deploying AI in the EU. The risk classification framework determines which systems require the most intensive compliance work. The conformity assessment process is substantial but manageable for organizations with good engineering documentation practices. And the governance framework is the organizational infrastructure that makes the whole system sustainable. Organizations starting compliance work now, before the August 2026 deadline, have time to build this systematically. Those starting after the deadline will face compressed timelines and elevated regulatory scrutiny.


About the Author

Fredrik Karlsson

Group COO & CISO at Opsio

Operational excellence, governance, and information security. Aligns technology, risk, and business outcomes in complex IT environments.

Editorial standards: This article was written by a certified practitioner and peer-reviewed by our engineering team. We update content quarterly to ensure technical accuracy. Opsio maintains editorial independence — we recommend solutions based on technical merit, not commercial relationships.