In This Guide
- The EU AI Act uses a four-tier risk classification: Prohibited, High-Risk, Limited-Risk, and Minimal-Risk — each with fundamentally different compliance obligations
- Prohibited AI practices (Article 5) are banned outright and have been enforceable since February 2025 — violations carry the highest penalties (up to €35M or 7% of global turnover)
- High-risk classification is driven by the use case (Annex I product safety or Annex III listed domains), not by the technical sophistication of the AI model
- The Article 6(3) exception allows certain Annex III AI systems to escape high-risk classification if they do not pose a significant risk of harm — but the assessment must be documented and the system registered in the EU database
- Misclassification is one of the most common and consequential EU AI Act compliance failures — it can result in under-compliance or unnecessary over-investment
The Risk-Based Approach
The EU AI Act (Regulation 2024/1689) is built on a risk-based regulatory philosophy. Rather than applying uniform rules to all AI systems, the Act calibrates obligations according to the level of risk an AI system poses to health, safety, and fundamental rights. This approach recognizes that a spam filter and a criminal sentencing recommendation engine are fundamentally different in their potential impact, and regulates them accordingly.
The four risk tiers form a pyramid:
- Prohibited (Unacceptable Risk) — Banned completely. No compliance pathway; these AI practices must not exist.
- High-Risk — Permitted but heavily regulated. Must meet stringent requirements under Chapter III, Section 2 (Articles 8-15) and undergo conformity assessment before market placement.
- Limited Risk — Permitted with transparency obligations. Must disclose AI nature to users and comply with specific information requirements.
- Minimal Risk — Permitted with no mandatory obligations. Voluntary codes of conduct encouraged.
Getting classification right is the single most important step in your EU AI Act compliance journey. Everything downstream — your compliance obligations, resource allocation, timeline, and audit preparation — flows directly from how you classify each AI system. An incorrect classification has compounding effects: classify a high-risk system as minimal-risk, and you miss every mandatory requirement. Classify a minimal-risk system as high-risk, and you waste significant resources on unnecessary compliance work.
Obligations by Risk Tier — Summary
| Risk Tier | Mandatory Obligations | Conformity Assessment | Maximum Penalty |
|---|---|---|---|
| Prohibited | Cease and desist — practice must not exist | N/A — practice is banned | €35M or 7% of global turnover |
| High-Risk | Risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy/robustness/cybersecurity, QMS, post-market monitoring | Required (self-assessment or third-party per Art. 43) | €15M or 3% of global turnover |
| Limited Risk | Transparency disclosures (AI interaction notice, content labelling, emotion detection notice) | Not required | €15M or 3% of global turnover |
| Minimal Risk | AI literacy (Art. 4) only; voluntary codes of conduct | Not required | €7.5M or 1% (for false information to authorities) |
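Note that Article 99 phrases each ceiling as a fixed amount or a percentage of total worldwide annual turnover, "whichever is higher", when the offender is an undertaking. A minimal sketch of that arithmetic (the function name is illustrative):

```python
def penalty_ceiling_eur(fixed_cap_eur: int, pct_points: int, turnover_eur: int) -> int:
    """Applicable penalty ceiling for an undertaking under Article 99:
    the fixed cap or the turnover percentage, whichever is higher."""
    return max(fixed_cap_eur, turnover_eur * pct_points // 100)

# Prohibited-practice tier (EUR 35M or 7%) for a firm with EUR 1bn turnover:
# 7% of 1bn is 70M, which exceeds the 35M fixed cap.
print(penalty_ceiling_eur(35_000_000, 7, 1_000_000_000))  # 70000000
```

For smaller firms the fixed cap dominates: at €100M turnover, 7% is €7M, so the €35M cap applies.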
Prohibited AI Practices (Article 5)
Article 5 of the EU AI Act defines AI practices that are considered to pose an unacceptable risk to fundamental rights and are therefore banned entirely. These prohibitions have been in force since February 2, 2025. Any organization operating these practices in connection with the EU is already in violation.
The Eight Prohibited Practices
1. Subliminal Manipulation (Article 5(1)(a))
AI systems that deploy subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, with the objective or effect of materially distorting behaviour in a way that causes or is reasonably likely to cause significant harm.
Example: An AI system embedded in a mobile game that uses subliminal visual or audio patterns to influence children's purchasing behaviour without their awareness.
2. Exploitation of Vulnerabilities (Article 5(1)(b))
AI systems that exploit vulnerabilities of a person or a specific group of persons due to their age, disability, or a specific social or economic situation, to materially distort behaviour in a way likely to cause significant harm.
Example: An AI-powered payday lending system that targets elderly individuals with cognitive impairments, using their vulnerability to push them into unfavourable financial products.
3. Social Scoring (Article 5(1)(c))
AI systems for evaluating or classifying natural persons based on their social behaviour or personal characteristics, where the resulting social score leads to detrimental or unfavourable treatment in social contexts unrelated to the data collection context, or treatment that is disproportionate to the social behaviour.
Example: A government system that rates citizens based on their social media activity, purchasing behaviour, and social connections, and uses the score to deny public services or restrict travel.
4. Predictive Policing Based Solely on Profiling (Article 5(1)(d))
AI systems that make risk assessments of natural persons to predict the risk of criminal offending, based solely on profiling or personality traits. This does not prohibit AI systems that support human assessment based on objective, verifiable facts linked to criminal activity.
Example: An AI system that profiles individuals based on their neighbourhood, demographic characteristics, and social associations to predict criminal behaviour without any objective factual basis.
5. Untargeted Facial Recognition Database Scraping (Article 5(1)(e))
AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.
Example: A technology company scraping billions of social media profile photos to build a facial recognition database without targeted consent or lawful basis.
6. Emotion Recognition in Workplace and Education (Article 5(1)(f))
AI systems that infer emotions of natural persons in the areas of workplace and educational institutions, except where the AI system is intended to be placed on the market or put into service for medical or safety reasons.
Example: An AI system that monitors employee facial expressions during video calls to assess engagement, stress levels, or attitude — unless used for genuine occupational health and safety monitoring.
7. Biometric Categorization by Sensitive Attributes (Article 5(1)(g))
AI systems that categorize natural persons individually based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. This does not apply to labelling or filtering of lawfully acquired biometric datasets or to categorization by law enforcement.
Example: An AI system that analyzes facial features to classify individuals by ethnicity or infer religious beliefs for targeting purposes.
8. Real-Time Remote Biometric Identification in Public Spaces (Article 5(1)(h))
Real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes — with three narrow exceptions: targeted search for specific victims (abduction, trafficking, sexual exploitation); prevention of a specific, substantial, and imminent threat to life or a genuine and foreseeable terrorist threat; and identification of a suspect of a serious criminal offence (per the specific list in the Act).
Example: A city-wide CCTV network using real-time facial recognition to identify and track individuals in public streets for general surveillance purposes.
If any AI system in your inventory matches a prohibited practice, it must be decommissioned immediately. There is no grace period — these prohibitions have been in force since February 2025. Document the decommissioning decision, the system that was stopped, and the date of cessation for your compliance records.
High-Risk AI Systems
High-risk AI systems are the core regulatory focus of the EU AI Act. They are permitted but subject to the most demanding compliance obligations — a comprehensive set of requirements in Chapter III, Section 2 (Articles 8-15) that must be satisfied before the system can be placed on the EU market or put into service.
Two Pathways to High-Risk Classification
An AI system is classified as high-risk through two routes defined in Article 6:
Route 1: Annex I — Product Safety Component (Article 6(1))
An AI system is high-risk if it is intended to be used as a safety component of a product, or is itself a product, covered by the EU harmonization legislation listed in Annex I. These include:
- Machinery and machinery products (Regulation 2023/1230)
- Toys (Directive 2009/48/EC)
- Recreational craft and personal watercraft (Directive 2013/53/EU)
- Lifts (Directive 2014/33/EU)
- Equipment for potentially explosive atmospheres (Directive 2014/34/EU)
- Radio equipment (Directive 2014/53/EU)
- Pressure equipment (Directive 2014/68/EU)
- Cableway installations (Regulation 2016/424)
- Personal protective equipment (Regulation 2016/425)
- Gas appliances (Regulation 2016/426)
- Medical devices (Regulation 2017/745)
- In vitro diagnostic medical devices (Regulation 2017/746)
- Civil aviation safety (Regulation 2018/1139)
- Motor vehicles (Regulation 2019/2144)
- Agricultural and forestry vehicles (Regulation 167/2013)
- Marine equipment (Directive 2014/90/EU)
- Rail system interoperability (Directive 2016/797)
For Annex I systems, the AI Act requirements are typically integrated into the existing product conformity assessment under the respective product legislation. The full requirements apply from August 2, 2027 for most Annex I systems.
Route 2: Annex III — Standalone High-Risk (Article 6(2))
An AI system is high-risk if it falls into one of the eight domain categories listed in Annex III. These are standalone AI systems (not necessarily embedded in products) that operate in sensitive areas where the risk to health, safety, or fundamental rights is deemed significant. Full requirements apply from August 2, 2026.
The Article 6(3) Exception
An AI system referred to in Annex III shall not be considered high-risk if it does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons. This exception applies when the AI system meets any of these conditions:
- It is intended to perform a narrow procedural task
- It is intended to improve the result of a previously completed human activity
- It is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment without proper human review
- It is intended to perform a preparatory task to an assessment relevant for the purposes of the Annex III use cases
The provider must document the assessment before placing the system on the market and register the system in the EU database under Article 49(2); the documentation must be provided to national competent authorities on request. The exception does not apply if the AI system performs profiling of natural persons (as defined in GDPR Article 4(4)).
Annex III Categories Deep-Dive
Annex III lists eight domain categories. Understanding each category in detail is essential for accurate classification.
Category 1: Biometrics
Scope: Remote biometric identification systems (not real-time in public spaces for law enforcement — those are prohibited). Biometric categorization systems using sensitive or protected attributes. Emotion recognition systems (outside workplaces and education, which are prohibited).
Examples: Airport facial recognition for boarding verification. Border control fingerprint matching. Biometric access control in high-security facilities.
Note: Biometric identification for identity verification in a purely 1:1 matching scenario (such as unlocking a phone) is generally not high-risk. The high-risk designation applies primarily to 1:many identification and categorization systems.
Category 2: Critical Infrastructure
Scope: AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating, or electricity.
Examples: AI controlling power grid load balancing. Predictive maintenance AI for water treatment systems. AI traffic management systems controlling traffic light sequencing. AI monitoring digital infrastructure availability (where it is a safety component).
Category 3: Education and Vocational Training
Scope: AI systems used to determine access to or admission to educational and vocational training institutions. AI used to evaluate learning outcomes, including those used to steer the learning process. AI used to monitor prohibited behaviour during tests (proctoring).
Examples: AI-powered university admission screening. Automated essay grading systems. AI proctoring tools that monitor students during examinations. AI systems that recommend educational pathways based on student performance analysis.
Category 4: Employment, Workers Management, and Access to Self-Employment
Scope: AI used in recruitment (placing targeted job advertisements, analyzing applications, filtering candidates, evaluating candidates). AI used to make decisions affecting employment relationships (promotion, termination, task allocation, performance monitoring, evaluation).
Examples: AI CV screening and candidate ranking systems. AI-powered video interview analysis. AI systems that monitor employee productivity. AI tools that recommend promotion or termination decisions. Algorithmic management systems for gig workers.
Category 5: Access to and Enjoyment of Essential Private and Public Services
Scope: AI used by public authorities to evaluate eligibility for benefits, services, or to grant, reduce, revoke, or reclaim such benefits. AI used to evaluate creditworthiness (except for detecting financial fraud). AI used in risk assessment and pricing for life and health insurance. AI used to evaluate and classify emergency calls, including prioritization of dispatch.
Examples: AI credit scoring models. AI systems assessing insurance risk for health or life policies. AI prioritizing emergency dispatch calls. AI systems determining social benefits eligibility.
Category 6: Law Enforcement
Scope: AI used as polygraph or similar tools to detect deception. AI used to assess the risk of a person becoming a victim or the risk of criminal offending (except based solely on profiling — that is prohibited). AI used to profile individuals in criminal investigations. AI used for crime analytics.
Examples: AI-assisted evidence analysis in criminal investigations. Predictive policing tools that use objective factual data (not pure profiling). AI lie detection systems.
Category 7: Migration, Asylum, and Border Control
Scope: AI used as polygraph or similar tools in migration interviews. AI used to assess risks posed by persons entering or seeking to enter the EU. AI used to examine and assess applications for asylum, visa, or residence permits. AI used for border surveillance and detection of irregular border crossing.
Examples: AI-powered document verification in visa processing. AI risk assessment for travellers at border control. AI analysis of asylum applications for credibility assessment.
Category 8: Administration of Justice and Democratic Processes
Scope: AI used to assist judicial authorities in researching and interpreting facts and law and in applying the law to a concrete set of facts. AI used to influence the outcome of an election or referendum or the voting behaviour of persons (not including AI used for organizational or logistical purposes).
Examples: AI-powered legal research tools that recommend case outcomes to judges. AI systems that generate sentencing recommendations. AI tools used in election campaign targeting that attempt to influence voting behaviour.
Limited-Risk: Transparency Obligations
AI systems categorized as limited-risk are not subject to the full Chapter 2 requirements but must comply with specific transparency obligations under Article 50. These obligations ensure that persons interacting with AI systems or exposed to AI-generated content are appropriately informed.
Transparency Requirements by System Type
| AI System Type | Transparency Obligation | Who Must Comply |
|---|---|---|
| AI systems interacting with persons (chatbots, virtual assistants) | Inform persons that they are interacting with an AI system, unless this is obvious from the circumstances | Provider and Deployer |
| Emotion recognition systems | Inform persons that they are exposed to an emotion recognition system (workplace and education use is prohibited entirely) | Deployer |
| Biometric categorization systems | Inform persons exposed to the system (categorization by sensitive attributes is prohibited) | Deployer |
| AI systems generating synthetic content (deep fakes, AI-generated text, images, audio, video) | Mark outputs as artificially generated or manipulated in a machine-readable format. Deployers must disclose that content is AI-generated | Provider (marking) and Deployer (disclosure) |
Practical Implementation
- Chatbots: Display a clear notice before or at the start of interaction: "You are interacting with an AI system" or similar. The exact wording is not prescribed, but it must be clear and timely.
- AI-generated content: Embed machine-readable metadata (such as C2PA content credentials) indicating AI generation. Display human-readable labels on AI-generated images, videos, or audio.
- Emotion recognition: Display a notice informing individuals that emotion recognition is being performed and the purpose. Obtain consent where required by GDPR.
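The chatbot notice and content marking obligations above can be wired into an output pipeline. This is an illustrative sketch only: the notice wording, field names, and helper functions are hypothetical, and the metadata dict stands in for a real machine-readable standard such as C2PA content credentials (which the Act does not mandate by name).

```python
import json

AI_NOTICE = "You are interacting with an AI system."  # wording is not prescribed by the Act

def chatbot_reply(text: str, first_turn: bool) -> str:
    # Article 50(1): disclose the AI interaction unless obvious from context.
    # Here the notice is prepended on the first turn of a conversation.
    return f"{AI_NOTICE}\n\n{text}" if first_turn else text

def mark_generated_content(payload: bytes) -> dict:
    # Article 50(2): outputs must be marked as artificially generated in a
    # machine-readable format. This sidecar dict is a stand-in; production
    # systems would use a standard such as C2PA content credentials.
    return {
        "content_length": len(payload),
        "ai_generated": True,
        "generator": "example-model",  # hypothetical identifier
    }

print(chatbot_reply("Hello! How can I help?", first_turn=True))
print(json.dumps(mark_generated_content(b"<image bytes>")))
```

The design point is that disclosure and marking live in the delivery path, not in ad hoc UI copy, so every output is covered by default.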
While lighter than high-risk requirements, transparency obligations apply to a very large number of AI systems. Any organization deploying chatbots, AI-generated content, or virtual assistants must ensure disclosures are in place. Non-compliance carries penalties up to €15 million or 3% of global turnover — the same as high-risk system violations.
Minimal-Risk AI
AI systems that do not fall into the prohibited, high-risk, or limited-risk categories are classified as minimal-risk. The EU AI Act imposes no mandatory requirements on minimal-risk AI systems beyond the Article 4 AI literacy obligation that applies to all organizations.
What Qualifies as Minimal-Risk?
Examples include:
- AI-powered spam filters
- AI-enabled inventory management systems
- Recommendation engines for content or products (unless in a high-risk context)
- AI-powered search algorithms
- Predictive maintenance for non-critical equipment
- AI-powered language translation tools
- AI-enabled customer segmentation for marketing
- AI-powered code completion and development tools
Voluntary Codes of Conduct
The EU AI Act encourages (but does not mandate) providers and deployers of minimal-risk AI to voluntarily apply the requirements applicable to high-risk systems, or to adopt codes of conduct. This encouragement recognizes that even minimal-risk AI systems benefit from responsible governance practices.
Organizations with ISO 42001 certification are well-positioned for minimal-risk AI governance — the management system provides a structured approach even where regulation does not mandate one.
Classification Decision Tree
Use the following step-by-step process to classify each AI system in your inventory. Work through the questions in order — the first "yes" answer determines your classification (except where noted).
Step 1: Prohibited Practice Check
Question: Does the AI system perform any of the eight practices listed in Article 5?
- YES → PROHIBITED. Decommission immediately. No compliance pathway exists.
- NO → Continue to Step 2.
Step 2: Annex I Product Safety Check
Question: Is the AI system a safety component of a product (or is itself a product) covered by EU harmonization legislation listed in Annex I, AND does it require third-party conformity assessment under that legislation?
- YES → HIGH-RISK (Annex I). Comply with Chapter III, Section 2 requirements. Conformity assessment is integrated into the product's existing assessment. Deadline: August 2027.
- NO → Continue to Step 3.
Step 3: Annex III Domain Check
Question: Does the AI system fall into any of the eight Annex III categories (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice)?
- YES → Proceed to Step 4 (Article 6(3) exception check).
- NO → Continue to Step 5.
Step 4: Article 6(3) Exception Check
Question: Does the Annex III system satisfy all of the following: (a) at least one Article 6(3) condition applies — it performs only a narrow procedural task, improves the result of a previously completed human activity, detects decision-making patterns without replacing human assessment, or performs a preparatory task; (b) it does not perform profiling of natural persons; and (c) it does not pose a significant risk of harm to health, safety, or fundamental rights?
- YES → The system is NOT high-risk under the exception. Document the assessment, register the system in the EU database (Article 49(2)), and proceed to Step 5. The system may still be limited-risk or minimal-risk.
- NO → HIGH-RISK (Annex III). Comply with the full Chapter III, Section 2 requirements. Deadline: August 2026.
Step 5: Transparency Obligation Check
Question: Does the AI system (a) interact directly with natural persons (chatbot, virtual assistant), (b) detect emotions, (c) perform biometric categorization, or (d) generate or manipulate synthetic content (deep fakes, AI-generated text/images/audio/video)?
- YES → LIMITED-RISK. Comply with Article 50 transparency obligations.
- NO → Continue to Step 6.
Step 6: GPAI Model Check
Question: Is this a general-purpose AI model (trained on broad data at scale, capable of serving many different downstream tasks)?
- YES → GPAI model obligations under Chapter V (Articles 51-56) apply (transparency, technical documentation, copyright policy, and potentially systemic-risk obligations). Note: a GPAI model integrated into an AI system does not change the AI system's classification — the model and system obligations are separate layers.
- NO → Continue to Step 7.
Step 7: Default Classification
Result: MINIMAL-RISK. No mandatory obligations beyond Article 4 AI literacy. Voluntary codes of conduct encouraged.
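The seven steps above can be sketched as a short routine. This is an illustrative encoding, not a legal determination: the boolean fields stand in for the judgment calls each step actually requires, and all names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    # Each field is a yes/no answer to one decision-tree question.
    prohibited_practice: bool       # Step 1: matches an Article 5 practice
    annex_i_safety_component: bool  # Step 2: Annex I product + third-party assessment
    annex_iii_domain: bool          # Step 3: falls in an Annex III category
    exception_6_3_applies: bool     # Step 4: Art. 6(3) conditions met, no profiling
    transparency_trigger: bool      # Step 5: chatbot / emotion / biometric cat. / synthetic content
    gpai_model: bool                # Step 6: general-purpose AI model

def classify(s: AISystem) -> str:
    """Walk the decision tree in order; the first match determines the tier."""
    if s.prohibited_practice:
        return "PROHIBITED"
    if s.annex_i_safety_component:
        return "HIGH-RISK (Annex I)"
    if s.annex_iii_domain and not s.exception_6_3_applies:
        return "HIGH-RISK (Annex III)"
    if s.transparency_trigger:
        return "LIMITED-RISK"
    if s.gpai_model:
        return "GPAI model obligations (separate layer)"
    return "MINIMAL-RISK"

# An AI CV-screening tool: Annex III category 4 (employment), no exception.
cv_screener = AISystem(False, False, True, False, False, False)
print(classify(cv_screener))  # HIGH-RISK (Annex III)
```

Note how an Annex III system that qualifies for the exception falls through to the transparency check, matching Step 4's "may still be limited-risk or minimal-risk" outcome.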
Common Misclassification Mistakes
Based on our experience supporting organizations through EU AI Act compliance, these are the most frequent classification errors:
Mistake 1: Classifying by Technical Complexity Instead of Use Case
Organizations often assume that simple models (logistic regression, decision trees) cannot be high-risk. This is wrong. A simple logistic regression model used for credit scoring is high-risk (Annex III, category 5b) because the use case determines classification, not the model architecture. Conversely, a highly complex deep learning model used for internal content recommendation is minimal-risk.
Mistake 2: Ignoring Embedded AI in Third-Party Products
When an organization uses a third-party SaaS platform with embedded AI features (e.g., an HR platform with AI-powered candidate screening), the deployer obligations still apply. The AI is not "invisible" to the EU AI Act just because it is embedded in a third-party tool. If the use case falls into Annex III, the deployer has obligations.
Mistake 3: Over-Relying on the Article 6(3) Exception
The Article 6(3) exception is narrower than many organizations assume. It requires the system to be purely assistive, not perform profiling, and not pose significant risk. Many AI systems that "assist human decisions" are nevertheless high-risk because they meaningfully influence the decision outcome or because they profile individuals. The exception requires a documented assessment and EU database registration — and national competent authorities can request that documentation and challenge the conclusion.
Mistake 4: Treating "Internal Use" as an Exemption
There is no "internal use" exemption in the EU AI Act. An AI system used internally for employee management, performance evaluation, or recruitment is subject to the same classification rules as a commercially offered system. If it falls into Annex III category 4 (employment), it is high-risk regardless of whether it is developed in-house or purchased externally.
Mistake 5: Forgetting That Classification Can Change
An AI system classified as minimal-risk today can become high-risk tomorrow if its intended purpose changes. An AI recommendation engine (minimal-risk) repurposed to prioritize emergency dispatch calls (high-risk, Annex III category 5d) triggers reclassification and the full weight of the Chapter III, Section 2 requirements. Organizations must reassess classification whenever there is a significant change in use case, deployment context, or scope.
Mistake 6: Confusing "Limited Risk" with "Low Compliance Burden"
Limited-risk obligations (transparency disclosures) carry the same penalty ceiling as high-risk violations — up to €15 million or 3% of global turnover. Organizations that dismiss transparency requirements as minor are taking a significant financial risk. Every chatbot, every piece of AI-generated content, and every virtual assistant needs proper disclosure.
What Changes After Classification
Once you have classified each AI system, the classification drives your entire compliance programme:
For Prohibited Systems
- Immediate decommissioning required
- Document the cessation decision and date
- Assess whether any alternative, compliant AI approach can achieve the business objective
- Update your AI system inventory
For High-Risk Systems
- Begin readiness assessment and gap analysis against the Chapter III, Section 2 requirements (Articles 8-15)
- Develop remediation plan with ownership and timelines
- Build or update technical documentation per Annex IV
- Implement or strengthen risk management, data governance, logging, transparency, human oversight, and cybersecurity controls
- Establish quality management system
- Plan conformity assessment (self-assessment or notified body)
- Prepare for EU database registration and CE marking
- Design post-market monitoring system
- Establish serious incident reporting procedures
For Limited-Risk Systems
- Implement transparency disclosures appropriate to the system type
- For AI-generated content: embed machine-readable markers and display human-readable labels
- For chatbots/virtual assistants: add interaction notices
- For emotion recognition: add notification mechanisms
- Document your transparency measures for audit trail purposes
For Minimal-Risk Systems
- Ensure AI literacy training covers staff interacting with these systems (Article 4)
- Consider voluntary adoption of codes of conduct or responsible AI practices
- Document the classification decision and the reasoning (for future reference if challenged)
- Monitor for changes in intended purpose that could trigger reclassification
Classification is not a one-time event. Build classification review into your AI governance lifecycle — reassess whenever an AI system changes purpose, scope, deployment context, or is substantially modified. A change in classification triggers a change in compliance obligations.
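A lightweight way to operationalize that review is to record each system's classification alongside the facts it was based on, and flag any drift. A minimal sketch (record fields and names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class InventoryRecord:
    system_name: str
    classification: str        # e.g. "MINIMAL-RISK"
    intended_purpose: str      # purpose the classification was based on
    deployment_context: str    # context the classification was based on

def needs_reclassification(record: InventoryRecord,
                           new_purpose: str, new_context: str) -> bool:
    # Any change in intended purpose or deployment context should send the
    # system back through the classification decision tree.
    return (record.intended_purpose != new_purpose
            or record.deployment_context != new_context)

rec = InventoryRecord("recommender-v2", "MINIMAL-RISK",
                      "product recommendations", "e-commerce storefront")
# Repurposing to emergency dispatch prioritization must trigger a review:
print(needs_reclassification(rec, "emergency dispatch prioritization",
                             "112 call centre"))  # True
```

Storing the basis of the classification, not just its result, is what makes the reassessment trigger auditable.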
Frequently Asked Questions
How many risk tiers does the EU AI Act define?
The EU AI Act defines four risk tiers: Prohibited (unacceptable risk — banned outright), High-Risk (strict requirements under Chapter III, Section 2), Limited-Risk (transparency obligations), and Minimal-Risk (no mandatory requirements, voluntary codes of conduct encouraged). The classification determines which obligations apply to an AI system and its operators. Each tier has distinct compliance requirements and penalty structures.
What AI practices are prohibited under the EU AI Act?
Article 5 prohibits eight categories of AI practices: subliminal manipulation causing harm, exploitation of vulnerabilities of specific groups, social scoring leading to detrimental treatment (whether by public or private actors), predictive policing based solely on profiling, untargeted facial recognition database scraping, emotion recognition in workplaces and educational institutions, biometric categorization using sensitive attributes (race, political opinions, etc.), and real-time remote biometric identification in public spaces for law enforcement (with three narrow exceptions). These prohibitions have been enforceable since February 2025.
What makes an AI system high-risk under the EU AI Act?
An AI system is high-risk through two routes: Annex I (safety component of products covered by EU harmonization legislation such as medical devices, machinery, or vehicles) or Annex III (standalone high-risk AI in eight listed domains: biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and justice/democratic processes). However, Annex III systems can claim the Article 6(3) exception if they perform only narrow procedural tasks and do not pose significant risk of harm.
Can an AI system's classification change over time?
Yes. Classification can change if the AI system's intended purpose is modified, if the system undergoes a substantial modification, or if the European Commission updates the Annex III categories (which it has the power to do through delegated acts). A system classified as minimal-risk could become high-risk if redeployed for an Annex III use case. Organizations must reassess classification whenever there is a significant change in use, scope, deployment context, or system functionality.
What is the Article 6(3) exception for Annex III systems?
Article 6(3) allows an AI system that falls within Annex III to avoid high-risk classification if it does not pose a significant risk of harm to health, safety, or fundamental rights. Specifically, the system must perform only a narrow procedural task, improve the result of a previously completed human activity, detect decision-making patterns without replacing human assessment, or perform a preparatory task. The exception does not apply if the system profiles natural persons. Providers must document the assessment before placing the system on the market, register the system in the EU database under Article 49(2), and provide the documentation to national competent authorities on request.