What Is the EU AI Act? Risk Classification Explained
Director & MLOps Lead
Predictive maintenance specialist, industrial data analysis, vibration-based condition monitoring, applied AI for manufacturing and automotive operations

The EU AI Act (Regulation 2024/1689) entered into force in August 2024 as the world's first comprehensive legal framework governing artificial intelligence. It applies to any AI system placed on the EU market or used within the EU, regardless of the provider's location. Non-compliance penalties reach up to 35 million euros or 7% of global annual turnover for the most serious violations. Understanding the risk classification framework is the starting point for any compliance program.
What Is the EU AI Act's Four-Tier Risk Framework?
The EU AI Act classifies all AI systems into four risk tiers. Each tier carries different compliance obligations. According to the European Commission (2024), approximately 5-15% of AI systems in the EU market fall into the high-risk category requiring mandatory conformity assessment. The classification is based on the potential harm an AI system can cause to health, safety, fundamental rights, democracy, and the rule of law.
Tier 1: Unacceptable Risk (Prohibited)
Prohibited AI practices are banned outright with no compliance pathway. As of February 2025, eight categories are prohibited: subliminal manipulation; exploitation of vulnerable groups; real-time remote biometric identification in publicly accessible spaces by law enforcement (with limited exceptions); biometric categorization to infer sensitive attributes (race, political opinions, religion, sexual orientation, trade union membership); social scoring that leads to detrimental or unjustified treatment; predictive policing based solely on profiling; untargeted scraping of facial images to build facial recognition databases; and emotion recognition in workplaces or educational institutions.
Tier 2: High Risk (Conformity Assessment Required)
High-risk AI systems require conformity assessment, technical documentation, human oversight mechanisms, and post-market monitoring before deployment. Two pathways define high-risk classification. Annex I covers AI used as a safety component in products already regulated under EU product safety law (medical devices, machinery, vehicles, aviation equipment). Annex III covers eight areas of standalone high-risk AI: biometric identification; critical infrastructure; education; employment; access to essential services; law enforcement; migration and border control; and administration of justice and democratic processes. High-risk requirements apply from August 2026.
Tier 3: Limited Risk (Transparency Obligations)
Limited-risk systems, primarily chatbots and AI-generated content tools, face transparency obligations only. Providers and deployers must ensure users know they are interacting with an AI system, and deepfakes and other AI-generated media must be labeled as such. These requirements apply from August 2026, the Act's general application date. No conformity assessment or technical documentation is mandated, though providers are encouraged to adopt voluntary codes of conduct.
Tier 4: Minimal Risk (No Mandatory Requirements)
AI systems posing minimal risk, including AI-enabled spam filters, AI in video games, and certain recommendation systems, have no mandatory requirements under the Act. However, providers can voluntarily commit to codes of conduct developed by the AI Office. The AI Office estimates that approximately 80% of all AI systems will fall into the minimal or limited risk categories.
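The four-tier structure lends itself to a simple portfolio inventory. Below is a minimal Python sketch of such an inventory; the system names and their tier assignments are hypothetical examples, not an official classification tool.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "conformity assessment required"
    LIMITED = "transparency obligations"
    MINIMAL = "no mandatory requirements"

# Hypothetical AI portfolio, mapped to tiers per the Act's framework
PORTFOLIO = {
    "social-scoring-engine": RiskTier.UNACCEPTABLE,  # Article 5 prohibited practice
    "cv-screening-model": RiskTier.HIGH,             # Annex III: employment
    "support-chatbot": RiskTier.LIMITED,             # transparency obligations
    "spam-filter": RiskTier.MINIMAL,                 # no mandatory requirements
}

def systems_needing_action(portfolio):
    """Return the systems that carry any obligation under the Act."""
    return [name for name, tier in portfolio.items()
            if tier is not RiskTier.MINIMAL]
```

In this sketch, only the spam filter falls outside the scope of mandatory obligations, which mirrors the point above: most systems are low risk, but each one still needs to be explicitly classified.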
What Are the Key EU AI Act Deadlines?
The EU AI Act applies in phases rather than all at once. The European Commission's implementation timeline is:
February 2025: prohibited practice bans take effect.
August 2025: GPAI model provisions and governance structure apply.
August 2026: high-risk AI requirements and conformity assessment obligations apply.
August 2027: transition period ends for AI systems in regulated products already placed on the market.
Organizations should map their AI portfolio against these deadlines immediately, as the February 2025 deadline has already passed.
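Mapping a portfolio against the phased timeline can be automated. The sketch below encodes the milestone dates (the Act's application dates fall on 2 February 2025 and 2 August of the later years) and reports which milestones already apply on a given date; the dictionary keys are illustrative labels, not terms from the regulation.

```python
from datetime import date

# Key EU AI Act application dates from the phased timeline
DEADLINES = {
    "prohibited_practices": date(2025, 2, 2),
    "gpai_and_governance": date(2025, 8, 2),
    "high_risk_requirements": date(2026, 8, 2),
    "regulated_product_transition_end": date(2027, 8, 2),
}

def deadlines_passed(today):
    """Return the compliance milestones that already apply as of `today`."""
    return [name for name, deadline in DEADLINES.items() if deadline <= today]
```

For example, checked against 1 September 2025, the sketch reports that the prohibition bans and the GPAI provisions already apply, while the high-risk obligations are still ahead.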
What Are General Purpose AI (GPAI) Model Rules?
The EU AI Act introduced specific provisions for General Purpose AI models, covering foundation models like GPT-4, Claude, and Gemini that can perform diverse tasks. All GPAI providers must maintain technical documentation, comply with EU copyright law, and publish training data summaries. GPAI models with systemic risk, defined as trained on compute exceeding 10^25 FLOPs, face additional requirements: adversarial testing (red-teaming), incident reporting to the AI Office, cybersecurity protection measures, and energy consumption reporting. The systemic risk tier currently applies to the most powerful frontier models.
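The systemic-risk threshold is a single numeric test on training compute, so it can be expressed directly. This is a minimal sketch of that check only; the real classification also allows the Commission to designate models as systemic risk on other grounds.

```python
# Training-compute threshold for presumed systemic risk under the Act
SYSTEMIC_RISK_FLOPS = 1e25

def is_systemic_risk_gpai(training_flops: float) -> bool:
    """A GPAI model is presumed to pose systemic risk when its
    training compute exceeds 10^25 floating-point operations."""
    return training_flops > SYSTEMIC_RISK_FLOPS
```

A model trained on 5x10^25 FLOPs would cross the threshold and trigger the additional obligations (red-teaming, incident reporting, cybersecurity, energy reporting), while one trained on 10^24 FLOPs would not.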
How Does Risk Classification Work in Practice?
Risk classification is not automatic. Organizations must assess each AI system against the Annex I and Annex III criteria, document their classification rationale, and implement compliance measures appropriate to the determined risk tier. The classification must be reviewed when the AI system is significantly modified. A change in intended use can move a system from minimal risk to high risk. For example, a recommendation engine used for entertainment is minimal risk, but the same system used to determine access to healthcare services is high risk under Annex III.
The most common misclassification we encounter is treating AI systems that make automated decisions about individual benefits, credit, or employment as minimal risk because they include a human approval step. The EU AI Act's high-risk classification under Annex III applies to the intended purpose of the AI system, not its position in a workflow: an AI credit scoring system used for loan decisions is high risk even if a human approves every output.
What Are the Penalties for Non-Compliance?
The EU AI Act's penalty framework scales by infringement type. Violations of prohibited AI practice bans: up to 35 million euros or 7% of global annual turnover, whichever is higher. Violations of high-risk AI obligations or GPAI requirements: up to 15 million euros or 3% of global turnover. Providing false or misleading information to authorities: up to 7.5 million euros or 1.5% of global turnover. For SMEs and startups, penalties are proportionately capped. National market surveillance authorities enforce the Act, with coordination by the EU AI Office established within the European Commission.
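The "whichever is higher" rule means the effective exposure grows with company size. A minimal sketch of that arithmetic, using the prohibited-practice tier as an example (the function and its parameter names are illustrative, not from the regulation):

```python
def max_fine_eur(tier_cap_eur: float, turnover_pct: float,
                 global_turnover_eur: float) -> float:
    """The maximum fine is the HIGHER of the fixed cap and the
    percentage of global annual turnover for the relevant tier."""
    return max(tier_cap_eur, turnover_pct * global_turnover_eur)

# Prohibited-practice violation for a company with EUR 1bn global turnover:
# max(EUR 35m, 7% of EUR 1bn) = EUR 70m
exposure = max_fine_eur(35_000_000, 0.07, 1_000_000_000)
```

For a company with EUR 1 billion in global turnover, the turnover-based figure (EUR 70 million) exceeds the fixed cap, so the turnover percentage governs; for smaller companies the fixed cap dominates.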
Frequently Asked Questions
Does the EU AI Act apply to non-EU companies?
Yes. The Act uses an effects-based jurisdiction approach: it applies to any provider placing an AI system on the EU market, any deployer using an AI system in the EU, and any provider or deployer established outside the EU whose AI system output is used within the EU. Non-EU providers of high-risk AI systems must also appoint an authorized representative established in the EU.
Is existing AI software automatically covered by the EU AI Act?
AI systems already deployed before August 2026 have a transition period until August 2027 to achieve compliance with high-risk requirements, provided no substantial modifications are made. Systems placed on the market or put into service after August 2026 must comply before deployment. The prohibited practice bans applied from February 2025 with no transition relief, meaning organizations should have already reviewed systems for prohibited practices.
Where can organizations find official EU AI Act guidance?
The EU AI Office publishes official guidance, standardization requests, and the GPAI Code of Practice at digital-strategy.ec.europa.eu. The full regulation text is available in the EU Official Journal (OJ L 2024/1689). ENISA publishes technical guidance on AI security and compliance. CEN-CENELEC is developing harmonized standards under a mandate from the European Commission, which will create presumption of conformity when published.
About the Author

Director & MLOps Lead at Opsio
Editorial standards: This article was written by a certified practitioner and peer-reviewed by our engineering team. We update content quarterly to ensure technical accuracy. Opsio maintains editorial independence — we recommend solutions based on technical merit, not commercial relationships.