In This Guide
- A structured readiness assessment is the critical first step toward EU AI Act compliance - it turns regulatory ambiguity into a concrete action plan
- The five-phase approach (Inventory → Classification → Gap Analysis → Remediation → Evidence Pack) provides a repeatable, auditable methodology
- High-risk AI systems face the most demanding requirements: risk management, data governance, technical documentation, transparency, human oversight, and cybersecurity
- Starting with an AI system inventory prevents the most common compliance failure - not knowing what AI you actually operate
- Organizations with ISO 42001 certification can reuse 60-70% of their existing documentation for the EU AI Act evidence pack
Overview
The EU Artificial Intelligence Act (Regulation 2024/1689) is the world's first comprehensive legal framework for AI. With prohibited practices already enforceable since February 2025, general-purpose AI obligations in effect from August 2025, and high-risk system requirements due by August 2026, organizations need a systematic readiness assessment - not a last-minute scramble.
This guide walks you through a five-phase readiness assessment methodology that takes you from "where do we even start?" to a fully documented evidence pack ready for conformity assessment. Whether you are a provider placing AI systems on the EU market, a deployer using high-risk AI in your operations, or an importer bringing non-EU AI products into Europe, the process is the same: understand your AI estate, classify it, find the gaps, fix them, and prove it.
We have structured this guide around the practical reality of compliance programmes we have supported across technology companies, financial institutions, healthcare organizations, and manufacturing firms. The methodology works for organizations with a single high-risk AI system as well as enterprises running hundreds of AI models across multiple business units.
Why It Matters
The EU AI Act introduces the most consequential AI regulation globally. Non-compliance is not merely a reputational risk - it carries concrete financial and operational consequences:
- Penalties up to €35 million or 7% of global annual turnover for prohibited AI practice violations
- Penalties up to €15 million or 3% of turnover for high-risk AI system requirement breaches
- Market access at stake: Non-compliant high-risk AI systems cannot be placed on the EU market or put into service
- Extraterritorial reach: The EU AI Act applies to any organization whose AI system output is used in the EU, regardless of where the organization is established
- Supply chain pressure: EU-based deployers will increasingly require compliance evidence from their AI system providers
A readiness assessment performed now - well ahead of the August 2026 high-risk deadline - gives your organization the time to remediate gaps methodically rather than reactively. Organizations that wait until mid-2026 will find that qualified auditors, notified bodies, and compliance consultants are fully booked.
Already in force: Prohibited AI practices and the AI literacy obligation (Feb 2025), GPAI rules (Aug 2025).
Coming next: High-risk AI system requirements (Aug 2026), Annex I systems (Aug 2027).
Your readiness assessment should begin at least 12 months before your applicable deadline.
Phase 1: AI System Inventory
You cannot classify what you cannot see. The first phase establishes a complete, centralized inventory of every AI system your organization develops, deploys, imports, or distributes. This inventory becomes the foundation for every subsequent compliance decision.
What Qualifies as an "AI System"?
The EU AI Act defines an AI system as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments" (Article 3(1)).
This definition is deliberately broad. It captures:
- Machine learning models (supervised, unsupervised, reinforcement learning)
- Deep learning and neural networks
- Natural language processing systems including chatbots and LLM-powered tools
- Computer vision systems
- Recommendation engines
- Automated decision-making systems
- Robotic process automation with adaptive/learning components
- Generative AI systems
Inventory Data Points
For each AI system, capture the following information:
| Category | Data Points to Capture |
|---|---|
| Identity | System name, unique identifier, version, business owner, technical owner |
| Purpose | Intended purpose, use cases, business justification, deployment context |
| Technical Profile | Model type/architecture, training data sources, input types, output types, autonomy level |
| Stakeholders | Affected persons, deployers, downstream users, geographic deployment scope |
| Your Role | Provider, deployer, importer, or distributor (per EU AI Act definitions) |
| Lifecycle Stage | Development, testing, deployed, decommissioned, planned |
| Third-Party Dependencies | External models, APIs, foundation models, open-source components used |
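To make the inventory machine-readable from day one, the data points above can be captured in a structured record. The sketch below is illustrative only; the field names and the `Role` enum are our own shorthand for the table above, not terminology from the Act:

```python
from dataclasses import dataclass, field
from enum import Enum

class Role(Enum):
    # Your role for this system, per EU AI Act definitions
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

@dataclass
class AISystemRecord:
    # Identity
    name: str
    system_id: str
    version: str
    business_owner: str
    technical_owner: str
    # Purpose and deployment context
    intended_purpose: str
    deployment_regions: list[str]
    # Your role under the Act
    role: Role
    # Lifecycle stage: development / testing / deployed / decommissioned / planned
    lifecycle_stage: str = "development"
    # Third-party dependencies: external models, APIs, foundation models
    dependencies: list[str] = field(default_factory=list)

record = AISystemRecord(
    name="Fraud Detector",
    system_id="AI-001",
    version="1.2",
    business_owner="Finance",
    technical_owner="ML Platform Team",
    intended_purpose="Transaction fraud detection",
    deployment_regions=["EU"],
    role=Role.DEPLOYER,
)
```

A flat structure like this exports cleanly to a spreadsheet or register tool, which is usually where the inventory ends up living.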
Practical Tips for Inventory
- Cast a wide net: Survey every department - AI often exists in marketing (recommendation engines), HR (screening tools), finance (fraud detection), and operations (predictive maintenance) without central IT awareness
- Include third-party AI: SaaS platforms with embedded AI features count as AI systems under the Act if you are the deployer
- Check procurement records: Review software procurement for AI-powered tools purchased in the last 3 years
- Interview business unit leaders: Technical inventories miss shadow AI; business leaders know what their teams actually use
- Document "not AI" decisions: If you evaluate a system and determine it does not meet the AI system definition, record why - this demonstrates due diligence
Phase 2: Risk Classification
With your inventory complete, classify each AI system against the EU AI Act's four risk tiers. This classification determines which obligations apply to each system and drives the scope of your gap analysis.
The Four Risk Tiers
- Prohibited (Unacceptable Risk): AI practices banned under Article 5, including social scoring, subliminal manipulation, exploitation of vulnerabilities, real-time remote biometric identification in public spaces (with narrow exceptions), emotion recognition in workplaces and schools, untargeted facial recognition database scraping, and biometric categorization by sensitive attributes
- High-Risk: AI systems in Annex I (safety components of regulated products like medical devices, machinery, vehicles) and Annex III (standalone high-risk AI in biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and justice)
- Limited Risk: AI systems with transparency obligations - users must be informed when they interact with chatbots or emotion recognition systems, and deepfakes and other AI-generated content must be labelled as such
- Minimal Risk: All other AI systems - no mandatory requirements, but voluntary codes of conduct are encouraged
Effort by Risk Category
| Risk Tier | Compliance Effort | Key Obligations | Typical Timeline |
|---|---|---|---|
| Prohibited | Immediate action | Cease use, decommission, document decision | Immediate (already enforceable) |
| High-Risk | Very High | Full Chapter III, Section 2 requirements: risk management, data governance, documentation, logging, transparency, human oversight, accuracy, robustness, cybersecurity, conformity assessment, CE marking, EU database registration, post-market monitoring | 6-18 months |
| Limited Risk | Moderate | Transparency disclosures, labelling AI-generated content | 2-4 months |
| Minimal Risk | Low | AI literacy (Article 4), voluntary codes of conduct | 1-2 months |
Classification Decision Process
For each AI system in your inventory, work through this decision sequence:
- Prohibited check: Does the system perform any Article 5 practice? If yes, it must be stopped immediately
- Annex I check: Is the system a safety component of, or itself, a product covered by EU harmonization legislation listed in Annex I? If yes, it is high-risk
- Annex III check: Does the system fall into one of the eight Annex III categories? If yes, it is high-risk (subject to the Article 6(3) exception where the system does not pose a significant risk of harm)
- GPAI check: Is this a general-purpose AI model? If yes, additional GPAI-specific obligations apply under Chapter V
- Transparency check: Does the system interact directly with persons, detect emotions, generate or manipulate content? If yes, transparency obligations apply
- Default: If none of the above apply, the system is minimal-risk
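The decision sequence above is order-dependent: the first matching tier wins. A minimal sketch of that logic, using hypothetical boolean flags that an assessor would set after reviewing each system (the GPAI check is deliberately omitted here, since GPAI obligations layer on top of a tier rather than replacing it):

```python
def classify(system: dict) -> str:
    """Apply the tiering decision sequence in order; first match wins.

    Keys are illustrative flags recorded during assessment, not Act terminology.
    """
    # 1. Prohibited check: any Article 5 practice
    if system.get("article5_practice"):
        return "prohibited"
    # 2. Annex I check: safety component of a regulated product
    if system.get("annex_i_product"):
        return "high-risk"
    # 3. Annex III check, subject to the Article 6(3) exception
    if system.get("annex_iii_category") and not system.get("article_6_3_exception"):
        return "high-risk"
    # 4. Transparency check: interaction, emotion detection, content generation
    if (system.get("interacts_with_persons")
            or system.get("detects_emotions")
            or system.get("generates_content")):
        return "limited-risk"
    # 5. Default
    return "minimal-risk"
```

For example, an HR screening tool would carry `annex_iii_category="employment"` and classify as high-risk regardless of any transparency flags, because the checks run in order.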
Many organizations classify AI systems as "minimal risk" by default without properly assessing Annex III applicability. An HR screening tool that filters job applicants is high-risk under Annex III category 4 (employment). A credit scoring model is high-risk under category 5 (essential services). Review Annex III categories carefully before classifying any system as minimal risk.
Phase 3: Gap Analysis
The gap analysis is the core of your readiness assessment. For each AI system classified as high-risk (or limited-risk, as applicable), you systematically compare your current controls and documentation against the specific EU AI Act obligations.
Gap Analysis Framework for High-Risk AI
High-risk AI systems must comply with Chapter III, Section 2 of the EU AI Act (Articles 8-15) plus additional provider obligations (Articles 16-22). Structure your gap analysis around these requirement areas:
Article 9 - Risk Management System
Assess whether you have:
- A documented risk management system operating throughout the AI system lifecycle
- Processes to identify and analyze known and reasonably foreseeable risks
- Risk estimation and evaluation procedures with defined risk acceptance criteria
- Risk mitigation measures that address residual risks
- Testing procedures to identify the most appropriate risk management measures
- Consideration of risks from intended use and reasonably foreseeable misuse
Article 10 - Data and Data Governance
Assess whether you have:
- Data governance practices for training, validation, and testing datasets
- Data quality criteria defined and enforced (relevance, representativeness, accuracy, completeness)
- Bias examination and mitigation processes for datasets
- Data preparation, labelling, and cleaning procedures documented
- Assessment of data availability, quantity, and suitability
Article 11 - Technical Documentation
Assess whether you have:
- Comprehensive technical documentation prepared before market placement
- Documentation covering all elements specified in Annex IV
- Processes to keep documentation current throughout the lifecycle
- Documentation accessible to competent authorities upon request
Article 12 - Record-Keeping (Logging)
Assess whether you have:
- Automatic logging capabilities built into the AI system
- Logs that enable traceability of system functioning
- Log retention for appropriate periods
- Logs accessible to deployers and authorities as required
Article 13 - Transparency and Provision of Information
Assess whether you have:
- System designed for sufficient transparency to enable deployer interpretation of outputs
- Instructions for use covering capabilities, limitations, and risks
- Information on intended purpose, accuracy levels, and known limitations
- Human oversight measures described
Article 14 - Human Oversight
Assess whether you have:
- Human oversight measures designed into the AI system
- Humans able to fully understand the system's capabilities and limitations
- Ability for humans to correctly interpret AI system outputs
- Override or stop functionality available to human overseers
- Measures to prevent "automation bias" in human oversight
Article 15 - Accuracy, Robustness, and Cybersecurity
Assess whether you have:
- Declared and documented accuracy levels with metrics
- Robustness testing against errors, faults, and inconsistencies
- Cybersecurity measures protecting against adversarial attacks (data poisoning, model evasion, model inversion)
- Technical redundancy solutions where appropriate
Gap Scoring
For each requirement area, score the gap using a consistent scale:
| Score | Status | Description |
|---|---|---|
| 0 | Not Addressed | No controls, processes, or documentation in place |
| 1 | Initial / Ad Hoc | Some awareness, ad hoc activities, not documented |
| 2 | Developing | Processes exist but are incomplete or inconsistently applied |
| 3 | Defined | Documented processes in place, evidence available, but not yet fully validated |
| 4 | Managed | Fully operational with monitoring, evidence, and review cycles |
| 5 | Optimized | Continuously improved, externally validated, fully aligned with EU AI Act requirements |
A score of 3 or below in any area indicates a gap requiring remediation before conformity assessment.
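Applying the scale consistently across systems makes it trivial to generate a remediation worklist. A small sketch (the requirement-area names and scores below are illustrative):

```python
REMEDIATION_THRESHOLD = 3  # scores at or below this require remediation

def open_gaps(scores: dict[str, int]) -> list[str]:
    """Return requirement areas at or below the threshold, worst-scoring first."""
    return sorted(
        (area for area, score in scores.items() if score <= REMEDIATION_THRESHOLD),
        key=lambda area: scores[area],
    )

scores = {
    "Art. 9 Risk Management": 2,
    "Art. 10 Data Governance": 4,
    "Art. 11 Technical Documentation": 1,
    "Art. 14 Human Oversight": 3,
}
# open_gaps(scores) ->
# ["Art. 11 Technical Documentation", "Art. 9 Risk Management", "Art. 14 Human Oversight"]
```

Sorting worst-first mirrors the prioritization logic in Phase 4: score-0 and score-1 findings represent the largest remediation effort and should surface at the top of the plan.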
Phase 4: Remediation Plan
The gap analysis produces a list of findings. The remediation plan turns those findings into an actionable project with clear ownership, timelines, and milestones tied to EU AI Act enforcement dates.
Structuring the Remediation Plan
For each gap identified, document:
- Gap description: What is missing or insufficient, with reference to the specific EU AI Act article
- Current state: What exists today (gap score from Phase 3)
- Target state: What "compliant" looks like for this requirement
- Remediation actions: Specific tasks to close the gap
- Owner: Named individual responsible for delivery
- Resources needed: Budget, tooling, external support, training
- Priority: Critical, High, Medium, Low (based on risk and deadline)
- Target completion date: Aligned to applicable EU AI Act deadline
- Evidence of closure: How you will demonstrate the gap is closed
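Once each gap carries a priority and a target date, ordering the plan is mechanical. A sketch of one reasonable ordering rule - priority band first, then earliest deadline - with made-up example items:

```python
from datetime import date

# Lower rank = handled sooner
PRIORITY_RANK = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}

def plan_order(items: list[dict]) -> list[dict]:
    """Order remediation items: priority band first, then earliest target date."""
    return sorted(items, key=lambda i: (PRIORITY_RANK[i["priority"]], i["target_date"]))

items = [
    {"gap": "Annex IV documentation", "priority": "High", "target_date": date(2026, 2, 1)},
    {"gap": "Prohibited practice shutdown", "priority": "Critical", "target_date": date(2025, 1, 15)},
    {"gap": "AI literacy training", "priority": "Medium", "target_date": date(2025, 9, 1)},
]
ordered = plan_order(items)
```

Whatever rule you adopt, encode it once and apply it uniformly, so that the plan's sequencing is defensible when an auditor asks why one gap was closed before another.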
Prioritization Logic
Not all gaps are equal. Prioritize based on:
- Prohibited practices: Immediate priority - these are already enforceable. Any prohibited AI practice must be ceased immediately
- AI literacy (Article 4): Already enforceable since February 2025 - ensure staff competence programmes are in place
- High-risk system gaps scoring 0-1: These represent the largest remediation effort and should start first
- Technical documentation (Annex IV): This is typically the most time-consuming deliverable and should be started early
- Conformity assessment preparation: Engage with notified bodies early, as demand will outstrip supply near the deadline
- Post-market monitoring: This requires operational processes and tooling that take time to implement
For a typical organization with 3-5 high-risk AI systems, expect to allocate: 1-2 dedicated compliance staff, 0.5 FTE per AI system from the technical team, legal/regulatory counsel involvement, and potentially external support for conformity assessment preparation. Budget €100,000-€300,000 for a full compliance programme including external audit costs.
Phase 5: Evidence Pack Assembly
The evidence pack is the tangible output of your compliance programme. It compiles all documentation, records, and artifacts that demonstrate your AI system meets EU AI Act requirements. For high-risk systems, this evidence pack forms the basis for conformity assessment.
Evidence Pack Contents
A complete evidence pack for a high-risk AI system includes:
1. Technical Documentation (Annex IV)
- General description of the AI system (intended purpose, versions, hardware/software prerequisites)
- Detailed description of system elements and development process
- Information on training, validation, and testing data (methodology, data characteristics, preparation, bias assessment)
- Design specifications (model logic, computational resources, design choices, validation approaches)
- Description of system capabilities and limitations (accuracy levels, foreseeable unintended outcomes, potential risks)
- Human oversight measures (technical measures, human oversight procedures)
- Information on changes and updates made post-deployment
2. Risk Management Records
- Risk management plan and methodology
- Risk identification register (known and foreseeable risks)
- Risk assessment results with scoring rationale
- Risk treatment plan with mitigation measures
- Residual risk assessment and acceptance decisions
- Testing results demonstrating risk mitigation effectiveness
3. Data Governance Records
- Data governance policy
- Training data documentation (sources, characteristics, quality assessment)
- Validation and testing data documentation
- Bias assessment report and mitigation actions
- Data preparation and labelling procedures
4. Testing and Validation Evidence
- Test plans and strategies
- Accuracy metrics and benchmark results
- Robustness testing results (adversarial testing, edge cases, stress tests)
- Cybersecurity assessment results
- Performance testing across different operating conditions
5. Operational Documentation
- Instructions for use (deployer-facing documentation)
- Human oversight procedures and training materials
- Post-market monitoring plan
- Incident reporting procedures
- Change management procedures
- Quality management system documentation
6. Compliance Declarations
- EU Declaration of Conformity
- CE marking documentation
- EU database registration records
- Notified body correspondence and assessment records (where applicable)
High-Risk Evidence Matrix
Use this matrix to track evidence completeness for each high-risk AI system:
| EU AI Act Requirement | Evidence Required | Typical Format | Owner |
|---|---|---|---|
| Art. 9 Risk Management | Risk management plan, risk register, treatment plan, residual risk assessment | Policy + Register + Assessment Reports | Risk / Compliance |
| Art. 10 Data Governance | Data governance policy, dataset documentation, bias assessment, quality metrics | Policy + Data Cards + Bias Report | Data Engineering / ML Ops |
| Art. 11 Technical Documentation | Annex IV documentation package | Technical Document (50-200 pages typical) | Engineering / Product |
| Art. 12 Record-Keeping | Logging architecture, log samples, retention policy | Architecture Docs + Log Samples + Policy | Engineering / DevOps |
| Art. 13 Transparency | Instructions for use, capability/limitation disclosure | User Documentation + Disclosure Notices | Product / Legal |
| Art. 14 Human Oversight | Oversight procedures, training records, override mechanisms | SOP + Training Records + System Config | Operations / Product |
| Art. 15 Accuracy/Robustness | Accuracy benchmarks, robustness test results, cybersecurity assessment | Test Reports + Pen Test / Security Assessment | QA / Security |
| Art. 17 Quality Management | QMS documentation, internal audit records, corrective actions | QMS Manual + Audit Reports + CAPAs | Quality / Compliance |
| Art. 72 Post-Market Monitoring | Monitoring plan, monitoring reports, incident log | Monitoring Plan + Dashboards + Incident Log | Operations / ML Ops |
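The matrix lends itself to a simple completeness metric per system - useful for reporting programme status to leadership. A minimal sketch, with an illustrative subset of the matrix rows:

```python
def completeness(matrix: dict[str, bool]) -> float:
    """Fraction of evidence items present for one high-risk AI system."""
    return sum(matrix.values()) / len(matrix)

evidence = {
    "Art. 9 Risk Management": True,
    "Art. 10 Data Governance": True,
    "Art. 11 Technical Documentation": False,
    "Art. 12 Record-Keeping": True,
    "Art. 17 Quality Management": False,
}
# completeness(evidence) -> 0.6
```

A binary present/absent flag is a deliberate simplification; in practice you may prefer to reuse the 0-5 gap scores from Phase 3 so partially complete evidence is visible.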
Leveraging ISO 42001
If your organization has implemented or is pursuing ISO 42001 (AI Management System), you have a significant head start. ISO 42001 provides structured coverage of many EU AI Act requirements, reducing your readiness assessment effort substantially.
Where ISO 42001 Directly Supports EU AI Act Compliance
- Risk Management (Art. 9): ISO 42001 Clause 6.1.2 and Clause 8.2 require AI risk assessment and treatment processes - reusable with minimal modification
- Data Governance (Art. 10): Annex A.6 addresses data quality, provenance, and preparation - strong alignment with Article 10
- Documentation (Art. 11): Annex A.5.8 and A.7 require comprehensive AI system documentation - substantial overlap with Annex IV requirements
- Transparency (Art. 13): Annex A.7.3 covers information provision for AI systems - directly supports transparency obligations
- Human Oversight (Art. 14): Annex A.8 addresses AI system use including human control measures
- Competence / AI Literacy (Art. 4): Clause 7.2 and 7.3 cover competence and awareness requirements
Gaps Between ISO 42001 and EU AI Act
ISO 42001 does not cover the following EU AI Act-specific requirements:
- Conformity assessment procedures and CE marking
- EU database registration
- Specific prohibited practice rules
- Post-market surveillance reporting to national authorities
- Serious incident reporting within mandated timeframes
- GPAI model-specific obligations (transparency, copyright, systemic risk)
- Specific instructions-for-use content requirements
Our experience shows that organizations with ISO 42001 certification can reuse approximately 60-70% of their existing documentation and processes for the EU AI Act evidence pack. The remaining 30-40% requires new work focused on regulatory-specific requirements.
Common Pitfalls
Having supported numerous EU AI Act readiness programmes, we consistently observe these mistakes:
1. Incomplete AI System Inventory
Organizations inventory their core AI systems but miss embedded AI in third-party SaaS tools, open-source components with ML capabilities, and shadow AI projects in business units. A deployer using a third-party AI-powered HR screening tool has EU AI Act obligations - the tool being external does not exempt you.
2. Defaulting to "Minimal Risk" Classification
Without rigorous Annex III assessment, organizations underclassify their AI systems. An AI system that assists in creditworthiness assessment is high-risk (Annex III, category 5b) even if it is a simple logistic regression model. Classification depends on the use case, not the technical sophistication.
3. Treating Compliance as a One-Time Exercise
The EU AI Act requires ongoing compliance - post-market monitoring, periodic reviews, incident reporting, and documentation updates. Organizations that treat their evidence pack as a one-time deliverable will fall out of compliance rapidly.
4. Ignoring the Deployer Obligations
Many organizations focus exclusively on provider obligations. However, deployers of high-risk AI have their own significant obligations under Article 26, including: using systems in accordance with instructions, ensuring human oversight, monitoring operations, retaining auto-generated logs, and conducting data protection impact assessments where required.
5. Late Engagement with Notified Bodies
For high-risk AI systems requiring third-party conformity assessment (certain biometric identification systems under Annex III, point 1), early engagement with notified bodies is critical. The number of designated notified bodies for AI is still limited, and demand will surge as the August 2026 deadline approaches.
6. Neglecting the Supply Chain
If you use third-party AI models or components in your high-risk AI system, you need documented evidence of their compliance characteristics. Establish supply chain due diligence processes and contractual requirements early.
Frequently Asked Questions
How long does an EU AI Act readiness assessment take?
A thorough readiness assessment typically takes 8-12 weeks depending on the number and complexity of AI systems. Phase 1 (inventory) takes 2-3 weeks, risk classification 1-2 weeks, gap analysis 2-3 weeks, and remediation planning 2-4 weeks. Organizations with existing ISO 42001 or similar frameworks can compress the timeline significantly. Enterprise organizations with large AI portfolios may require 16+ weeks.
What is included in an EU AI Act evidence pack?
A complete evidence pack for a high-risk AI system includes: technical documentation per Annex IV, risk management system records, data governance policies and quality assessments, testing and validation results, human oversight procedures, post-market monitoring plans, quality management system documentation, conformity declaration, and CE marking documentation. The typical evidence pack for a single high-risk system runs 200-500 pages.
Do I need a readiness assessment if my AI systems are minimal-risk?
Yes. Even minimal-risk AI systems should undergo classification verification to confirm they are truly minimal-risk. Misclassification that results in non-compliance with high-risk obligations carries penalties of up to €15 million or 3% of global turnover. Additionally, the Article 4 AI literacy obligation applies to all organizations regardless of risk tier, and Article 50 transparency obligations may apply even to systems that are not high-risk.
Can ISO 42001 replace an EU AI Act readiness assessment?
ISO 42001 provides a strong foundation but does not replace a readiness assessment. The EU AI Act has specific regulatory requirements - CE marking, EU database registration, conformity assessment, post-market surveillance, serious incident reporting - that go beyond ISO 42001's scope. However, ISO 42001 implementation covers many obligations and significantly reduces the gap analysis effort, typically by 60-70%.
What happens if we fail the EU AI Act conformity assessment?
Failing a conformity assessment means you cannot place the high-risk AI system on the EU market or put it into service until nonconformities are addressed. There is no formal "fail" - the assessing body issues findings that must be remediated, after which reassessment occurs. The financial risk of non-compliance is substantial: penalties range up to €35 million or 7% of global annual turnover for the most serious violations.