Key Takeaways
  • Conformity assessment is a mandatory pre-market requirement for every high-risk AI system placed on the EU market or used within the EU
  • Two assessment paths exist: internal (self-assessment by the provider) and third-party (via a notified body) — the path depends on the AI system's use case and risk category
  • A quality management system (QMS) aligned with Article 17 is essential regardless of which path you follow
  • Successfully completing conformity assessment results in a declaration of conformity and the right to affix CE marking
  • Post-market monitoring obligations continue after the initial assessment — compliance is not a one-time event

What is Conformity Assessment?

Conformity assessment under the EU AI Act (Regulation 2024/1689) is the formal process by which an AI system provider demonstrates that a high-risk AI system meets all the requirements set out in Chapter III, Section 2 of the regulation (Articles 8-15). It is a well-established concept in EU product-safety law, already used for medical devices, machinery, and telecommunications equipment. The EU AI Act extends this approach to artificial intelligence systems for the first time.

The assessment verifies that the provider has implemented appropriate measures across several areas: risk management, data governance, technical documentation, record-keeping, transparency, human oversight, and accuracy, robustness, and cybersecurity. Upon successful completion, the provider issues a declaration of conformity and affixes the CE marking, signalling to the market and to regulators that the system complies with EU law.

Conformity assessment is not optional. Under Article 43, every provider of a high-risk AI system must complete the applicable assessment procedure before placing the system on the market or putting it into service. Failing to do so exposes the provider to enforcement action and fines of up to 3% of annual global turnover or EUR 15 million, whichever is higher.

Important Context

The EU AI Act's conformity assessment framework builds on the EU's existing "New Legislative Framework" for product regulation. If you have experience with CE marking for medical devices (MDR), machinery (Machinery Regulation), or radio equipment (RED), many of the concepts — notified bodies, declarations of conformity, technical documentation — will be familiar. However, there are AI-specific additions, especially around data governance and algorithmic transparency.

When is Conformity Assessment Required?

Conformity assessment is required whenever a provider intends to place a high-risk AI system on the EU market or put it into service within the EU. High-risk AI systems are defined in two categories:

Annex I: AI as a Safety Component

AI systems that serve as a safety component of a product already covered by EU harmonisation legislation — for example, medical devices, machinery, toys, lifts, pressure equipment, radio equipment, civil aviation, motor vehicles, and marine equipment. In these cases, the AI conformity assessment is integrated into the existing product conformity assessment procedure under the relevant sectoral legislation.

Annex III: Standalone High-Risk AI

AI systems used in sensitive areas explicitly listed in Annex III of the regulation:

  • Biometric identification and categorisation: Remote biometric identification systems, emotion recognition systems, and biometric categorisation
  • Critical infrastructure: AI used in the management and operation of critical digital infrastructure, road traffic, and supply of water, gas, heating, or electricity
  • Education and vocational training: AI determining access to education, evaluating learning outcomes, or monitoring prohibited behaviour during exams
  • Employment and worker management: AI used for recruitment, selection, work-related decisions, task allocation, or monitoring/evaluating workers
  • Access to essential services: AI assessing credit-worthiness, risk and pricing in life/health insurance, or evaluating eligibility for public assistance
  • Law enforcement: AI for risk assessment of natural persons, polygraphs, evidence reliability assessment, or crime prediction
  • Migration, asylum, and border control: AI for risk assessment, verifying travel documents, or examining asylum applications
  • Administration of justice: AI assisting judicial authorities in researching and interpreting facts and law

Exception: Annex III Systems with Narrow Scope

An Annex III AI system is not considered high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights. Under Article 6(3), this applies when the AI system is intended to perform a narrow procedural task, improve the result of a previously completed human activity, detect decision-making patterns or deviations from them without replacing or influencing human assessment, or perform a preparatory task to an assessment. The exception never applies to systems that perform profiling of natural persons, and the provider must document its assessment before placing the system on the market.

Key Deadlines

| Deadline | What applies | Impact on conformity assessment |
| --- | --- | --- |
| August 2, 2025 | Prohibited AI practices banned; GPAI obligations apply | Ensure your AI system does not fall into a prohibited category before starting assessment |
| August 2, 2026 | High-risk AI requirements fully applicable (Annex III systems) | Conformity assessment must be completed before market placement from this date |
| August 2, 2027 | Annex I high-risk systems (safety components of products) | AI-specific conformity assessment integrated into existing product procedures |

Two Paths: Internal vs Third-Party Assessment

The EU AI Act provides two distinct conformity assessment procedures. Which path you must follow depends on your AI system's classification and use case.

Path 1: Internal Conformity Assessment (Annex VI)

The internal conformity assessment procedure allows the provider to self-assess compliance without involvement of a third party. The provider verifies that its quality management system complies with Article 17, its technical documentation meets Annex IV, and the AI system satisfies all requirements of Chapter III, Section 2. This path is available for most Annex III high-risk AI systems, provided the provider applies harmonised standards or common specifications covering the relevant requirements.

Path 2: Third-Party Assessment via Notified Body (Annex VII)

Third-party assessment requires an independent conformity assessment body (notified body) to audit both the quality management system and the technical documentation. The notified body issues a certificate and may conduct periodic audits. This path is mandatory for specific categories and optional for providers who choose external validation.

When Third-Party Assessment is Mandatory

Third-party assessment by a notified body is mandatory for:

  • High-risk AI systems intended for 'real-time' and 'post' remote biometric identification of natural persons (Annex III, point 1(a))
  • High-risk AI systems where the provider has not applied harmonised standards or common specifications covering the relevant requirements
  • High-risk AI systems where harmonised standards exist but the provider chooses an alternative means of compliance

| Criteria | Internal assessment (Annex VI) | Third-party assessment (Annex VII) |
| --- | --- | --- |
| Who conducts it | The provider (self-assessment) | An independent notified body |
| Applicable to | Most Annex III high-risk AI systems when harmonised standards are followed | Biometric identification AI; systems not following harmonised standards; voluntary for other high-risk systems |
| QMS requirement | Provider must establish and maintain a QMS per Article 17 | Notified body audits the QMS and technical documentation |
| Technical documentation | Prepared by provider; available for authority inspection | Reviewed and approved by notified body |
| Certificate issued | No external certificate; provider issues declaration of conformity | Notified body issues conformity certificate (valid up to 5 years) |
| Ongoing obligations | Internal audits, post-market monitoring, authority access | Periodic surveillance audits by notified body, plus internal monitoring |
| Typical timeline | 3-6 months (once QMS and documentation are in place) | 5-10 months (including notified body engagement) |
| Cost considerations | Internal resource investment; no external fees | Notified body fees (audit, certificate, surveillance); can be significant |

Step-by-Step Conformity Assessment Process

Regardless of which path applies, the conformity assessment process follows a structured sequence. Below is a comprehensive step-by-step walkthrough.

Step 1: Classify Your AI System

Before beginning any assessment activities, definitively classify your AI system:

  • Is it an AI system? Confirm it meets the definition in Article 3(1) — a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness, and that infers from inputs to generate outputs such as predictions, content, recommendations, or decisions
  • Is it high-risk? Check against Annex I (safety component of regulated products) and Annex III (standalone high-risk categories)
  • Does an exception apply? Verify whether the narrow-scope exception in Article 6(3) applies — if the AI system performs only a procedural or preparatory task, document this assessment and retain it

Practical Tip

Maintain a formal AI system register with a classification record for every AI system in scope. This register is valuable evidence during both internal and external audits, and it supports your obligations under Article 49 (EU database registration).
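
As a minimal sketch of such a register entry — in Python, with hypothetical field names that are illustrative shorthand, not terms defined by the regulation — a classification record might look like this:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class AISystemRecord:
    """One entry in an (illustrative) AI system register."""
    name: str
    version: str
    intended_purpose: str
    annex_i: bool = False                      # safety component of a regulated product
    annex_iii_category: Optional[str] = None   # e.g. "employment", "biometrics"
    article_6_3_exception: bool = False        # narrow-scope exception claimed
    exception_rationale: Optional[str] = None  # documented reasoning, if claimed
    classified_on: date = field(default_factory=date.today)

    @property
    def high_risk(self) -> bool:
        """High-risk if in Annex I or Annex III scope and no documented
        Article 6(3) exception is claimed."""
        in_scope = self.annex_i or self.annex_iii_category is not None
        return in_scope and not self.article_6_3_exception

record = AISystemRecord(
    name="CVScreener",
    version="2.1",
    intended_purpose="Ranking job applicants",
    annex_iii_category="employment",
)
print(record.high_risk)  # employment is an Annex III category -> True
```

Keeping the exception rationale next to the classification flag makes the register self-documenting when auditors ask why a system was ruled out of scope.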

Step 2: Determine the Applicable Assessment Path

Based on your classification, determine whether you must follow the internal assessment path (Annex VI) or the third-party path (Annex VII). Key decision points:

  • If your system involves 'real-time' or 'post' remote biometric identification, you must use a notified body
  • If harmonised standards or common specifications exist and you apply them, you may use internal assessment for other Annex III systems
  • If you deviate from harmonised standards, you must engage a notified body
  • For Annex I systems, the assessment procedure is determined by the relevant sectoral legislation (e.g., MDR, Machinery Regulation)
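
The decision points above can be sketched as a small helper. The category labels and flags below are assumptions chosen for the illustration, not legal identifiers:

```python
def assessment_path(annex_iii_category: str,
                    harmonised_standards_applied: bool,
                    annex_i: bool = False) -> str:
    """Illustrative triage of the applicable conformity assessment route."""
    if annex_i:
        # Annex I systems follow the sectoral product procedure (e.g. MDR).
        return "sectoral procedure (Annex I legislation)"
    if annex_iii_category == "remote_biometric_identification":
        # Notified body involvement is mandatory for Annex III point 1(a).
        return "third-party (Annex VII)"
    if not harmonised_standards_applied:
        # Deviating from harmonised standards forces the notified-body route.
        return "third-party (Annex VII)"
    return "internal (Annex VI)"

print(assessment_path("employment", harmonised_standards_applied=True))
# -> internal (Annex VI)
```

A real triage would need more inputs (voluntary third-party assessment, common specifications), but the branching order — sectoral first, then biometrics, then standards coverage — mirrors the decision points above.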

Step 3: Establish or Align Your Quality Management System

Article 17 requires providers of high-risk AI systems to put in place a quality management system. This is a prerequisite for both internal and third-party assessment. The QMS must cover the areas detailed in the section below.

Step 4: Prepare Technical Documentation

Compile the technical documentation required under Article 11 and Annex IV. This is the core evidence package demonstrating compliance. It must cover the system description, design specifications, development process, data governance, monitoring and testing, risk management, and post-market monitoring. The documentation must be prepared before the AI system is placed on the market.

Step 5: Implement and Test Chapter 2 Requirements

Before assessment, ensure all requirements of Chapter III, Section 2 are implemented and tested:

  • Article 9: Risk management system operational and documented
  • Article 10: Data governance measures in place; training, validation, and testing datasets documented
  • Article 12: Automatic logging capabilities built in
  • Article 13: Transparency measures implemented; instructions for use prepared
  • Article 14: Human oversight mechanisms designed and tested
  • Article 15: Accuracy, robustness, and cybersecurity benchmarks met
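
A simple readiness check over this list can surface gaps before the formal review begins. The evidence flags below are hypothetical placeholders for whatever verification your QMS records:

```python
# Map each requirement from the list above to whether implementing
# evidence exists yet (flags here are illustrative sample data).
requirements = {
    "Art. 9 risk management": True,
    "Art. 10 data governance": True,
    "Art. 12 logging": True,
    "Art. 13 transparency": True,
    "Art. 14 human oversight": False,   # e.g. oversight testing still pending
    "Art. 15 accuracy/robustness/cybersecurity": True,
}

gaps = [name for name, evidenced in requirements.items() if not evidenced]
ready = not gaps
print(ready, gaps)  # -> False ['Art. 14 human oversight']
```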

Step 6: Conduct Internal Review or Engage Notified Body

For internal assessment (Annex VI): Conduct a systematic internal review verifying that the QMS is effective, technical documentation is complete, and all requirements of Chapter III, Section 2 are demonstrably met. Document findings and any corrective actions taken.

For third-party assessment (Annex VII): Select and engage a notified body designated for AI systems. The notified body will audit your QMS and review your technical documentation. Be prepared for:

  • An initial document review and QMS assessment
  • On-site audit of AI development and testing processes
  • Technical documentation review, potentially including testing or requesting additional evidence
  • Non-conformity findings requiring corrective action before certification

Step 7: Address Non-Conformities

Both paths may reveal gaps. Address all non-conformities before proceeding:

  • Implement corrective actions for each finding
  • Update documentation to reflect changes
  • Re-test affected requirements
  • For notified body audits, submit evidence of corrective actions for verification

Step 8: Issue Declaration of Conformity and Affix CE Marking

Upon successful completion, issue the EU declaration of conformity (Article 47) and affix the CE marking (Article 48). Register the AI system in the EU database (Article 49). Details on both are provided in the sections below.

Step 9: Establish Post-Market Monitoring

Implement a post-market monitoring system proportionate to the nature and risk level of the AI system. This is an ongoing obligation, not a one-time activity. Details are covered in the post-market monitoring section below.

Quality Management System Requirements

Article 17 of the EU AI Act mandates that providers of high-risk AI systems establish a quality management system. The QMS must be documented and systematic, covering the following elements:

Strategy and Compliance Approach

  • A strategy for regulatory compliance, including conformity assessment procedures and procedures for managing modifications to the AI system
  • Techniques, procedures, and systematic actions to be used for the design, development, and examination of the high-risk AI system

Design and Development Controls

  • Techniques for design, design control, and design verification
  • Development processes, including coding practices, version control, testing protocols, and validation procedures
  • Data management procedures covering data collection, labelling, storage, curation, and governance

Risk and Change Management

  • Risk management system implementation per Article 9
  • A change management process to determine whether a modification to the AI system is substantial (triggering new conformity assessment) or non-substantial
  • Post-market monitoring system and serious incident reporting procedures

Resource and Accountability Management

  • Accountability framework with clear roles and responsibilities
  • Resource management procedures ensuring adequate staffing, infrastructure, and competence
  • Record-keeping and documentation management
  • Procedures for communication with regulators and competent authorities

Leverage Existing Management Systems

If your organisation already holds ISO 42001 (AI management system), ISO 9001 (quality management), or ISO 27001 (information security) certification, much of the QMS infrastructure can be reused. ISO 42001 provides particularly strong alignment with Article 17 requirements, covering AI risk assessment, data governance, and AI system lifecycle management.

Technical Documentation

The technical documentation package (Article 11, Annex IV) is the cornerstone of conformity assessment. It must be prepared before market placement and kept up to date throughout the AI system's lifecycle. The documentation must enable competent authorities to assess compliance with the requirements of Chapter III, Section 2.

What the Documentation Must Cover

  • General description: Intended purpose, provider identity, version information, hardware/software requirements, and how the AI system interacts with external systems
  • Detailed description of elements: Development methodology, design specifications, system architecture, computational resources, and third-party tools/components used
  • Data and data governance: Training, validation, and testing datasets; data collection methods; data processing operations; bias examination; and data quality measures
  • Risk management: Description of the risk management process, identified risks, risk mitigation measures, and residual risk assessment
  • Monitoring, testing, and validation: Validation and testing procedures, metrics used, test results, and performance benchmarks
  • Change management and updates: Logging of all changes, modifications, and version history
  • Post-market monitoring plan: System for collecting and analysing data on AI system performance in the field

For a comprehensive breakdown of every Annex IV requirement and practical deliverables, see our EU AI Act Technical Documentation (Article 11) guide.

Declaration of Conformity & CE Marking

EU Declaration of Conformity (Article 47)

Upon completing the conformity assessment, the provider draws up a written EU declaration of conformity. This declaration must be kept for at least 10 years after the AI system has been placed on the market. It must contain:

  • AI system name and type, plus any additional unambiguous reference allowing identification
  • Name and address of the provider (and authorised representative, if applicable)
  • A statement that the declaration is issued under the sole responsibility of the provider
  • A statement that the AI system complies with the requirements of Chapter III, Section 2 of the AI Act and, where applicable, other relevant EU legislation
  • References to harmonised standards or common specifications applied
  • Where applicable, the name and identification number of the notified body, a description of the procedure performed, and identification of the certificate issued
  • Place and date of issue, name and function of the signatory, and signature
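
As an illustration, a completeness check over these required contents might look like the sketch below. The field keys paraphrase the list above (they are not the regulation's wording) and the sample values are placeholders:

```python
# Required contents of the Article 47 declaration, paraphrased as keys.
REQUIRED_FIELDS = [
    "system_name_and_type",
    "provider_name_and_address",
    "sole_responsibility_statement",
    "compliance_statement",
    "standards_references",
    "place_and_date_of_issue",
    "signatory_name_and_function",
]

def missing_fields(declaration: dict) -> list:
    """Return required fields that are absent or empty in a draft."""
    return [f for f in REQUIRED_FIELDS if not declaration.get(f)]

draft = {
    "system_name_and_type": "CVScreener, v2.1",
    "provider_name_and_address": "Example AG, Berlin",
    "sole_responsibility_statement": "Issued under the sole responsibility of the provider",
    "compliance_statement": "Complies with Regulation (EU) 2024/1689",
    "standards_references": ["<harmonised standard reference>"],
}
print(missing_fields(draft))
# -> ['place_and_date_of_issue', 'signatory_name_and_function']
```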

CE Marking (Article 48)

The CE marking must be affixed visibly, legibly, and indelibly to the high-risk AI system. If this is not possible due to the nature of the AI system, it must be affixed to the packaging or accompanying documentation. For high-risk AI systems provided digitally, a digital CE marking may be used, provided it can be easily accessed via the interface through which the system is accessed or via a machine-readable code. The CE marking must be affixed before the AI system is placed on the market. For AI systems subject to third-party assessment, the CE marking is followed by the identification number of the notified body.

EU Database Registration (Article 49)

Before placing a high-risk AI system on the market or putting it into service, the provider must register the system (and themselves) in the EU database established under Article 71. This registration includes summary information about the AI system, the conformity assessment procedure followed, and the status of the declaration of conformity.

Post-Market Monitoring

Conformity assessment is not a one-and-done exercise. Article 72 requires providers to establish and document a post-market monitoring system, proportionate to the nature and risks of the AI system. This system must:

  • Actively and systematically collect, document, and analyse data on the AI system's performance throughout its lifetime
  • Enable the provider to evaluate continuous compliance with the requirements of Chapter III, Section 2
  • Be based on a post-market monitoring plan included in the technical documentation
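
A minimal sketch of one such monitoring check — comparing production performance against the benchmark recorded in the technical documentation — could look like this; the benchmark and tolerance values are assumptions:

```python
BENCHMARK_ACCURACY = 0.92   # value recorded at conformity assessment (example)
TOLERANCE = 0.03            # degradation that triggers investigation (example)

def check_drift(window_accuracies: list) -> bool:
    """Return True if the monitored window breaches the tolerance and
    should be escalated for investigation."""
    mean = sum(window_accuracies) / len(window_accuracies)
    return mean < BENCHMARK_ACCURACY - TOLERANCE

print(check_drift([0.91, 0.90, 0.92]))  # -> False (within tolerance)
print(check_drift([0.85, 0.86, 0.84]))  # -> True  (investigate)
```

In practice the monitored signals would come from production logging (Article 12) and feed corrective-action and, where relevant, incident-reporting workflows.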

Serious Incident Reporting (Article 73)

Providers must report serious incidents to the market surveillance authorities of the Member State where the incident occurred. A serious incident is one that directly or indirectly leads to, or is likely to lead to:

  • Death of a person or serious damage to a person's health
  • Serious and irreversible disruption of the management or operation of critical infrastructure
  • Serious harm to property or the environment
  • Infringement of obligations under EU law intended to protect fundamental rights

The initial report must be submitted immediately after the provider establishes a causal link (or reasonable likelihood), and no later than 15 days after becoming aware of the incident.
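
The outer 15-day limit can be tracked with a trivial helper. Note that shorter deadlines apply to certain incident types, which this sketch deliberately does not model:

```python
from datetime import date, timedelta

def latest_report_date(aware_on: date, limit_days: int = 15) -> date:
    """Latest permissible report date, counted from the day the provider
    became aware of the serious incident (outer limit only)."""
    return aware_on + timedelta(days=limit_days)

print(latest_report_date(date(2026, 3, 1)))  # -> 2026-03-16
```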

Ongoing Compliance Activities

  • Periodic internal audits of the QMS and AI system performance
  • Monitoring for model drift, data quality degradation, and emerging risks
  • Tracking regulatory updates, new harmonised standards, and guidance from the AI Office
  • For notified body-certified systems: participating in periodic surveillance audits (typically annual)
  • Evaluating whether modifications constitute substantial changes requiring new conformity assessment

Notified Bodies Explained

Notified bodies are independent conformity assessment organisations designated by EU Member States to carry out third-party assessment of high-risk AI systems. They play a critical role in the regulatory framework.

Designation and Requirements

To be designated as a notified body under the AI Act, an organisation must:

  • Be established under national law of a Member State
  • Demonstrate independence and absence of conflicts of interest
  • Have the necessary competence, including personnel with expertise in AI technologies, data science, machine learning, cybersecurity, and the relevant application domain
  • Maintain professional secrecy and data protection measures
  • Hold appropriate professional liability insurance
  • Comply with the requirements of Article 31 of the AI Act

What to Expect from a Notified Body Audit

The notified body assessment under Annex VII typically follows this process:

  1. Application and scoping: Submit your application, AI system documentation, and QMS details. The notified body defines the audit scope.
  2. Document review: The notified body reviews the QMS documentation and technical file to verify completeness and adequacy.
  3. QMS audit: On-site or remote audit of the quality management system, including design and development processes, data management, testing procedures, and change management.
  4. Technical documentation assessment: In-depth review of the Annex IV documentation, potentially including independent testing or requesting additional evidence.
  5. Findings and corrective actions: Non-conformities are reported. The provider must implement corrective actions and provide evidence before certification.
  6. Certification decision: If satisfied, the notified body issues a conformity certificate, valid for up to 5 years, subject to periodic surveillance.
  7. Surveillance audits: Periodic audits (typically annual) to verify ongoing compliance. The notified body may also conduct unannounced audits.

Selecting a Notified Body

When selecting a notified body, consider:

  • Domain expertise: Does the body have experience with your type of AI system and application domain?
  • Accreditation scope: Confirm the body is notified specifically for AI Act assessments (check the NANDO database)
  • Capacity and timeline: Demand for notified body services may exceed supply initially — engage early
  • Geographic presence: Consider logistics for on-site audits
  • Reputation and track record: Look for bodies with experience in related sectors (e.g., medical devices, cybersecurity)

Common Mistakes to Avoid

Based on experience with conformity assessment in adjacent regulatory domains (medical devices, cybersecurity), here are the most common mistakes providers make when preparing for EU AI Act conformity assessment:

1. Treating Conformity Assessment as a Documentation Exercise

Conformity assessment requires demonstrated compliance, not just documented compliance. Having policies on paper is insufficient — you must show evidence that processes are implemented and effective. Testing results, audit logs, incident records, and training evidence are all essential.

2. Underestimating Data Governance Requirements

Article 10's data governance requirements are among the most challenging for AI providers. Simply documenting your datasets is not enough. You must demonstrate that training data is relevant, representative, and free from errors to the extent possible. You need documented bias examination processes and evidence that data quality measures are working.

3. Neglecting the Change Management Process

Many providers fail to establish a robust change management process that distinguishes between substantial and non-substantial modifications. Without this, every update to your AI system potentially triggers a full new conformity assessment — an unsustainable burden for actively maintained systems.
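
A change-triage helper along these lines — with criteria flags that are simplifications for illustration, not the regulation's legal tests — might look like:

```python
def is_substantial(changes_intended_purpose: bool,
                   may_affect_compliance: bool,
                   foreseen_in_initial_assessment: bool) -> bool:
    """Illustrative triage: a modification is treated as substantial when it
    changes the intended purpose, or may affect compliance in a way not
    foreseen in the initial conformity assessment."""
    if changes_intended_purpose:
        return True
    return may_affect_compliance and not foreseen_in_initial_assessment

# Routine retraining foreseen in the original assessment: not substantial.
print(is_substantial(False, True, True))   # -> False
# New deployment context altering the intended purpose: substantial.
print(is_substantial(True, False, False))  # -> True
```

The key design point is the middle flag: pre-declaring foreseeable changes (e.g. scheduled retraining) in the initial assessment is what keeps routine updates from triggering reassessment.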

4. Starting Too Late

Building a QMS, compiling technical documentation, and conducting thorough testing takes months. Starting six months before the compliance deadline is often insufficient, especially for organisations without existing management systems. Begin preparation 12-18 months in advance.

5. Ignoring Post-Market Monitoring

Conformity assessment is a pre-market requirement, but ongoing compliance is just as important. Providers who treat assessment as a one-time hurdle risk non-compliance when the system's behaviour changes in production or when new risks emerge.

6. Failing to Maintain Traceability

Every requirement must be traceable to specific evidence. Maintain a compliance matrix mapping each Article and Annex IV requirement to the corresponding documentation, process, test result, or control. This traceability is invaluable during both internal reviews and notified body audits.
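
One way to keep such a matrix machine-checkable is a plain mapping from requirement to evidence artefacts; the artefact names below are hypothetical:

```python
# Illustrative compliance matrix: requirement -> evidence artefacts.
matrix = {
    "Art. 9 risk management":  ["risk_register.xlsx", "fmea_report.pdf"],
    "Art. 10 data governance": ["dataset_card.md", "bias_audit_2025.pdf"],
    "Art. 12 logging":         ["logging_design.md"],
    "Art. 14 human oversight": [],   # gap: no evidence linked yet
}

# Requirements with no linked evidence are exactly the traceability gaps.
untraced = sorted(req for req, evidence in matrix.items() if not evidence)
print(untraced)  # -> ['Art. 14 human oversight']
```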

Frequently Asked Questions

When is third-party conformity assessment required under the EU AI Act?

Third-party conformity assessment by a notified body is mandatory for high-risk AI systems used for 'real-time' and 'post' remote biometric identification of natural persons. It is also required when the provider has not applied harmonised standards or common specifications covering the relevant requirements. Other high-risk AI systems listed in Annex III may use the internal conformity assessment procedure, provided those standards or specifications are applied.

What is the difference between internal and third-party conformity assessment?

Internal conformity assessment (Annex VI) allows the provider to self-assess compliance based on their QMS and technical documentation, without external audit. Third-party assessment (Annex VII) requires an independent notified body to audit both the QMS and technical documentation, and issue a certificate. Both paths require the same substantive compliance with the requirements of Chapter III, Section 2; the difference lies in who verifies that compliance.

What is required for the EU AI Act declaration of conformity?

The EU declaration of conformity (Article 47) is a written statement by the provider confirming that the AI system complies with the AI Act. It must include the provider's name and address, AI system identification, a conformity statement, references to harmonised standards used, notified body details (if applicable), and the date, place, and authorised signature. It must be kept for at least 10 years and made available to national authorities upon request.

How long does a conformity assessment take?

Timelines vary depending on organisational readiness. Internal conformity assessment typically takes 3-6 months once the QMS and technical documentation are in place. Third-party assessment via a notified body adds 2-4 months for the external audit and certification process. End-to-end preparation — including building the QMS and documentation from scratch — can take 6-12 months or longer for complex AI systems.

Do I need to repeat the conformity assessment for AI system updates?

Not for every update. Only substantial modifications trigger a new conformity assessment. A substantial modification is one that goes beyond what is foreseen in the provider's initial conformity assessment and that may affect the AI system's compliance with Chapter 2 requirements, or that changes the AI system's intended purpose. Minor updates must be documented but do not require reassessment. The provider must maintain a change management process that evaluates each modification.