Key Takeaways
  • Article 11 of the EU AI Act requires providers of high-risk AI systems to compile and maintain comprehensive technical documentation before market placement
  • Annex IV defines nine sections of documentation covering everything from a general system description to the post-market monitoring plan
  • Technical documentation is the primary evidence package assessed during conformity assessment — both internal and third-party
  • Documentation must be kept up to date throughout the AI system's lifecycle and retained for at least 10 years
  • Organisations with existing ISO 42001 or ISO 27001 documentation have a head start but will need to address AI-specific gaps

Why Technical Documentation Matters

Technical documentation is not a bureaucratic formality under the EU AI Act — it is the central compliance artefact that underpins the entire conformity assessment process. Whether you pursue internal assessment under Annex VI or third-party assessment via a notified body under Annex VII, the technical documentation is what authorities and auditors will examine to determine whether your high-risk AI system meets the requirements of Chapter III, Section 2.

Think of the technical documentation as the "regulatory passport" for your AI system. Without it, you cannot issue a declaration of conformity, affix CE marking, or register in the EU database. With incomplete or inadequate documentation, you risk non-conformity findings that block market access and expose your organisation to enforcement action.

Beyond regulatory compliance, thorough technical documentation delivers practical benefits:

  • Knowledge preservation: Captures design decisions, data choices, and risk assessments that might otherwise exist only in individual team members' heads
  • Change management: Provides the baseline against which to evaluate whether modifications are substantial (requiring new conformity assessment) or non-substantial
  • Incident investigation: Enables rapid root-cause analysis when issues arise in production
  • Supply chain transparency: Supports downstream deployers' obligations to understand and oversee the AI systems they use
  • Cross-regulatory alignment: Much of the documentation can satisfy requirements under ISO 42001, GDPR (for AI processing personal data), and sector-specific regulations

Article 11 & Annex IV Overview

Article 11 of the EU AI Act establishes the legal obligation: providers of high-risk AI systems must draw up technical documentation in accordance with Annex IV before the system is placed on the market or put into service. The documentation must be kept up to date and made available to national competent authorities upon request.

Annex IV then specifies the content requirements. The documentation must contain, at a minimum, the following information categories:

| Annex IV Section | Content Area | Practical Deliverables |
| --- | --- | --- |
| 1 | General description of the AI system | System overview document, intended purpose statement, provider details, version register |
| 2 | Detailed description of elements and development process | Architecture diagrams, design specifications, algorithms/models used, development methodology documentation |
| 3 | Detailed information on monitoring, functioning, and control | Logging specifications, performance monitoring framework, human oversight procedures, instructions for use |
| 4 | Description of appropriateness of performance metrics | Accuracy metrics and benchmarks, robustness testing results, cybersecurity assessment reports |
| 5 | Detailed description of the risk management system | Risk assessment reports, risk mitigation measures, residual risk documentation, risk monitoring plan |
| 6 | Description of changes made throughout the lifecycle | Change log, version history, modification impact assessments, re-assessment records |
| 7 | List of harmonised standards applied | Standards reference list, statement of application, deviations documented |
| 8 | Copy of the EU declaration of conformity | Signed declaration per Article 47 |
| 9 | Description of the post-market monitoring plan | Post-market monitoring plan per Article 72, KPIs, incident reporting procedures |

Commission Implementing Acts

The European Commission is empowered to adopt implementing acts providing common specifications for technical documentation content. As these are published, they may refine or extend the Annex IV requirements. Monitor the AI Office publications for updates, and be prepared to adjust your documentation structure accordingly.

Section 1: General Description of the AI System

The general description serves as the "executive summary" of your AI system. It must provide enough context for any competent authority or notified body to understand what the system does, who it is for, and how it fits into its operational environment.

Required Information

  • Intended purpose: A clear, unambiguous description of what the AI system is designed to do, including the specific conditions of use and foreseeable misuse scenarios
  • Provider identification: Legal name, address, and contact information of the provider, plus any authorised representative
  • System version: Version number, release date, and the relationship to prior versions
  • Hardware and software requirements: Infrastructure on which the AI system runs, including computing requirements, memory, storage, and network dependencies
  • Integration interfaces: How the AI system interacts with other hardware or software systems, including APIs, data inputs/outputs, and dependencies on external services
  • Product or system context: For AI systems that are safety components of products covered by Annex I Union harmonisation legislation, describe the product into which the AI is integrated and how the AI component functions within it

Practical Tips

Write the intended purpose statement as if drafting a contractual scope — precise, bounded, and testable. Avoid vague language like "the system assists with decision-making." Instead, specify: "The system analyses credit application data (income, employment history, credit bureau records) to generate a credit risk score between 0 and 100, which is presented to a human loan officer for final decision." The clearer the intended purpose, the easier it is to test compliance and establish boundaries for foreseeable misuse.

Section 2: Design Specifications

This section provides the technical deep-dive into how the AI system is built. It must be sufficiently detailed for a technically competent assessor to understand the system's architecture and design choices.

Required Information

  • Development methodology: The methods and tools used to develop the AI system (e.g., supervised learning, reinforcement learning, rule-based systems, hybrid approaches)
  • System architecture: High-level and detailed architectural diagrams showing components, data flows, model pipelines, pre-processing stages, and post-processing logic
  • Algorithms and models: Description of the algorithms, model types, and techniques used, including the rationale for selection. For machine learning systems, specify the model architecture (e.g., neural network topology, ensemble methods, decision trees)
  • Computational resources: Computing infrastructure used for training and inference, including GPU/TPU specifications, cloud services, and estimated energy consumption
  • Third-party components: All external tools, libraries, pre-trained models, and APIs integrated into the system, with version numbers and licence information
  • Design trade-offs: Document key design decisions and their rationale, especially where trade-offs were made between accuracy, fairness, interpretability, or efficiency

Architecture Documentation Best Practices

Use standardised diagramming (e.g., UML, C4 model, or data flow diagrams) to ensure clarity. Include both the training pipeline (data ingestion, preprocessing, model training, evaluation) and the inference pipeline (input processing, model execution, output generation, human oversight integration). Ensure diagrams are version-controlled alongside the system itself.

Section 3: Development Process

This section documents the processes and practices followed during the development of the AI system. It demonstrates that the provider followed a structured, quality-controlled development approach.

Required Information

  • Design and development procedures: How the AI system was designed, developed, and tested, including code review processes, version control practices, and quality gates
  • Pre-market testing: Testing procedures used before market placement, including unit tests, integration tests, system tests, and acceptance tests
  • Validation methodology: How the system's performance was validated against the intended purpose and against foreseeable conditions of use
  • Training procedures: For ML-based systems, describe the training process including hyperparameter selection, training duration, convergence criteria, and regularisation techniques
  • Decisions made during development: Key technical decisions documented with rationale, alternatives considered, and trade-offs accepted

Version Control is Non-Negotiable

Every aspect of the AI system — code, model weights, training configurations, test datasets, and documentation — must be under version control. The ability to reproduce any previous state of the system is essential for conformity assessment, incident investigation, and change management. Use tools like Git for code, DVC or MLflow for model artefacts, and document management systems with audit trails for policy documents.
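As a lightweight illustration of this reproducibility principle (a sketch, not a substitute for Git, DVC, or MLflow), the snippet below pins a documentation version to the exact state of its artefacts by recording content hashes. The file paths are hypothetical placeholders.

```python
import hashlib
import json
from pathlib import Path


def artefact_manifest(paths: list[str]) -> dict[str, str]:
    """Map each artefact path to the SHA-256 hash of its contents."""
    manifest = {}
    for p in paths:
        digest = hashlib.sha256(Path(p).read_bytes()).hexdigest()
        manifest[p] = digest
    return manifest


# Illustrative usage: record the manifest alongside the documentation version,
# so any later audit can verify which exact artefacts were assessed.
# manifest = artefact_manifest(["model/weights.bin", "config/train.yaml"])
# Path("manifest.json").write_text(json.dumps(manifest, indent=2))
```

Regenerating the manifest at each release and diffing it against the previous one is a cheap way to prove to an assessor that the documented system state matches the deployed one.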

Section 4: Data & Data Governance

Data governance documentation is one of the most demanding and critical sections. Article 10 imposes specific requirements on training, validation, and testing data, and the technical documentation must demonstrate compliance.

Required Information

  • Training data description: Detailed description of training datasets including size, sources, collection methods, annotation processes, and data characteristics
  • Validation and testing data: Separate descriptions for validation and testing datasets, including how they were selected and how independence from training data was ensured
  • Data governance practices: Documented procedures for data collection, labelling, storage, curation, access control, and deletion
  • Data quality measures: How data relevance, representativeness, accuracy, and completeness were assessed and ensured. Quantitative metrics where available
  • Bias examination: Description of methods used to identify and mitigate biases in the datasets, including demographic representativeness analysis, fairness metrics applied, and actions taken to address identified biases
  • Data processing operations: All transformations applied to the data including cleaning, normalisation, augmentation, feature engineering, and anonymisation/pseudonymisation
  • Personal data handling: If personal data is used, document the legal basis for processing, data protection impact assessment (where conducted), and measures to comply with GDPR or applicable data protection legislation
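A data card can be kept machine-readable so it stays version-controlled with the pipeline. The sketch below is a minimal illustration; the field names and the example dataset are assumptions, not a mandated schema.

```python
from dataclasses import dataclass, field, asdict
import json


@dataclass
class DataCard:
    """Minimal dataset description for the Annex IV data governance file."""
    name: str
    version: str
    purpose: str                      # training / validation / testing
    size: int                         # number of records
    sources: list[str]
    collection_method: str
    annotation_process: str
    known_limitations: list[str] = field(default_factory=list)

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)


# Illustrative example (dataset details are hypothetical)
card = DataCard(
    name="credit-applications-2023",
    version="1.2.0",
    purpose="training",
    size=250_000,
    sources=["internal CRM export", "credit bureau feed"],
    collection_method="monthly batch export",
    annotation_process="dual-labelled by credit analysts, adjudicated",
)
```

One card per dataset, stored next to the pipeline configuration, makes the "separate descriptions for training, validation, and testing data" requirement straightforward to evidence.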

Practical Deliverables

Deliverable Purpose Recommended Format
Data catalogue / data card Describes each dataset used in training, validation, and testing Structured template (e.g., Google's Data Cards or custom format)
Data governance policy Defines organisational procedures for data management Policy document with approval and review history
Bias assessment report Documents bias examination methodology, findings, and mitigations Technical report with quantitative analysis
Data processing record Logs all transformations applied to datasets Pipeline configuration files + processing log
Data lineage diagram Visualises data flow from source to model consumption Flowchart or directed acyclic graph (DAG)
DPIA (if personal data used) Assesses and mitigates data protection risks GDPR Article 35 compliant assessment document

Section 5: Monitoring, Testing & Validation

This section demonstrates that the AI system's performance was thoroughly validated before market placement and that appropriate monitoring mechanisms are in place for ongoing operation.

Required Information

  • Performance metrics: Define the metrics used to evaluate accuracy, robustness, and other relevant performance characteristics, and explain why they are appropriate for the AI system's intended purpose
  • Testing procedures: Describe the testing methodology including test scenarios, test datasets, edge cases, stress testing, and adversarial testing
  • Test results: Report quantitative test results with confidence intervals, disaggregated by relevant subgroups where applicable (e.g., performance across different demographic groups)
  • Robustness evaluation: Evidence of testing against data perturbations, distribution shifts, adversarial inputs, and unexpected operating conditions
  • Cybersecurity assessment: Documentation of cybersecurity measures protecting the AI system against manipulation, data poisoning, model extraction, and other AI-specific threats
  • Logging capabilities: Description of automatic logging of AI system events as required by Article 12, including what is logged, retention periods, and access controls
  • Human oversight design: How the system enables human oversight per Article 14 — ability to monitor, interpret, intervene in, or override AI system decisions
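To make "quantitative test results with confidence intervals, disaggregated by relevant subgroups" concrete, here is a small standard-library sketch computing per-group accuracy with a Wilson score interval. The grouping key and data are illustrative; the interval method is one common choice, not a regulatory requirement.

```python
import math
from collections import defaultdict


def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return (centre - margin, centre + margin)


def accuracy_by_group(records):
    """records: iterable of (group, prediction, label) tuples."""
    counts = defaultdict(lambda: [0, 0])   # group -> [correct, total]
    for group, pred, label in records:
        counts[group][0] += int(pred == label)
        counts[group][1] += 1
    return {
        g: {"accuracy": c / n, "ci95": wilson_interval(c, n), "n": n}
        for g, (c, n) in counts.items()
    }
```

Large accuracy gaps between subgroups, or very wide intervals caused by small subgroup sample sizes, are exactly the findings the bias examination section should then discuss.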

Validation Against Foreseeable Conditions

One of the most important — and most frequently under-addressed — requirements is validation against foreseeable conditions of use. This means testing the AI system not only under ideal laboratory conditions, but also under the range of real-world conditions it is likely to encounter. Consider:

  • Variations in input data quality (noisy, incomplete, or corrupted inputs)
  • Edge cases specific to the application domain
  • Operational conditions (latency, concurrent users, system load)
  • Environmental factors (for systems processing sensor data: lighting, weather, angles)
  • User behaviour variations (different skill levels, unexpected interaction patterns)
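The first bullet (noisy or perturbed inputs) can be exercised with a simple perturbation test. This is a sketch under stated assumptions: `score` is a stand-in for your model's real inference call, and the noise level and drift threshold are acceptance criteria you would define in your own test plan.

```python
import random


def score(features: dict[str, float]) -> float:
    """Placeholder for the real model's inference call (illustrative only)."""
    raw = 0.5 * features["income"] / 1000 + 0.3 * features["years_employed"] * 4
    return min(100.0, max(0.0, raw))


def perturbation_test(features, noise_pct=0.05, trials=100, max_drift=10.0, seed=0):
    """Check that small input perturbations keep the score within max_drift."""
    rng = random.Random(seed)
    baseline = score(features)
    worst = 0.0
    for _ in range(trials):
        noisy = {k: v * (1 + rng.uniform(-noise_pct, noise_pct))
                 for k, v in features.items()}
        worst = max(worst, abs(score(noisy) - baseline))
    return worst <= max_drift, worst
```

Recording the worst observed drift per release, rather than only a pass/fail flag, gives the robustness evaluation section a trend line an assessor can follow.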

Section 6: Risk Management

The risk management documentation must demonstrate compliance with Article 9 — one of the most substantive requirements of the EU AI Act for high-risk AI systems.

Required Information

  • Risk management process: Description of the risk management system established and maintained throughout the AI system's lifecycle, including the methodology used for risk identification, estimation, and evaluation
  • Risk identification: Catalogue of known and reasonably foreseeable risks to health, safety, and fundamental rights, considering both intended use and reasonably foreseeable misuse
  • Risk estimation and evaluation: Assessment of identified risks with likelihood and severity ratings, using a structured methodology
  • Risk mitigation measures: For each identified risk, document the mitigation measures implemented, whether through design changes, safeguards, or instructions and warnings to deployers
  • Testing to identify risk management measures: Evidence of testing to identify the most appropriate risk management measures, including evaluation of whether measures introduce new risks or unacceptable trade-offs
  • Residual risk assessment: After mitigation, document the residual risk and explain why it is acceptable. Where residual risk communicates information to deployers, include this in the instructions for use
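The risk register entries above can be kept in a structured form so that residual risk acceptance is auditable. A minimal sketch follows; the 1–5 likelihood/severity scale and the escalation threshold are illustrative conventions, not something the Act prescribes.

```python
from dataclasses import dataclass


@dataclass
class Risk:
    risk_id: str
    description: str
    likelihood: int          # 1 (rare) .. 5 (almost certain)
    severity: int            # 1 (negligible) .. 5 (critical)
    mitigation: str
    residual_likelihood: int
    residual_severity: int

    @property
    def inherent_score(self) -> int:
        return self.likelihood * self.severity

    @property
    def residual_score(self) -> int:
        return self.residual_likelihood * self.residual_severity


def needs_escalation(risk: Risk, threshold: int = 8) -> bool:
    """Flag risks whose residual score still exceeds the acceptance threshold."""
    return risk.residual_score > threshold
```

Exporting this register per release, with the before/after scores side by side, doubles as the mitigation-effectiveness evidence the risk management file must contain.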

Lifecycle Risk Management

The EU AI Act explicitly requires that risk management is not a one-time exercise but a continuous process. Your risk documentation must show a living system — with regular reviews, updates triggered by new information or incidents, and clear linkage between risk assessment findings and changes to the AI system or its deployment conditions.

Risk Management Deliverables

| Deliverable | Content | Update Frequency |
| --- | --- | --- |
| Risk management plan | Defines methodology, scope, roles, and schedule for risk management | Annually, or upon significant system changes |
| Risk register | Lists all identified risks with likelihood, severity, mitigation, and residual risk | Continuously, as new risks are identified |
| Risk assessment report | Detailed analysis of risk assessment findings and conclusions | At each major system milestone and annually |
| Risk mitigation evidence | Test results, design change records, and validation of mitigation effectiveness | Upon each mitigation implementation |
| Residual risk statement | Summary of accepted residual risks with justification | Updated with each risk assessment revision |

Section 7: Changes & Updates

The AI Act requires documentation of all changes made to the AI system throughout its lifecycle. This section is critical for change management — determining whether a modification is substantial (requiring new conformity assessment) or non-substantial.

Required Information

  • Change log: A comprehensive record of every modification to the AI system since initial market placement, including the date, nature, scope, and rationale for each change
  • Modification impact assessment: For each significant change, an assessment of whether it constitutes a substantial modification per Article 43(4). A substantial modification is one that goes beyond what was foreseen in the initial technical documentation and may affect compliance with the requirements of Chapter III, Section 2
  • Re-assessment records: If a substantial modification triggers a new conformity assessment, document the scope of reassessment and its outcomes
  • Version history: Traceable record linking each version of the AI system to the corresponding technical documentation version

Change Classification Framework

Establish a clear classification framework for changes:

  • Non-substantial changes: Bug fixes, minor performance improvements within pre-defined thresholds, UI changes, infrastructure updates. Document in the change log but no reassessment needed.
  • Potentially substantial changes: Model retraining with new data, algorithm modifications, changes to intended purpose scope, changes to human oversight mechanisms. Require formal impact assessment to determine classification.
  • Substantial changes: New intended purpose, significant performance characteristic changes, new data types or domains, fundamental architecture changes. Trigger new conformity assessment.
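The three-tier framework above can be encoded as a simple triage function so every change request gets a consistent first classification. The trigger lists mirror the bullets; the exact triggers should come from your own change policy (these are illustrative).

```python
NON_SUBSTANTIAL = {"bug_fix", "ui_change", "infra_update", "minor_perf_tuning"}
POTENTIALLY_SUBSTANTIAL = {"model_retraining", "algorithm_change",
                           "purpose_scope_change", "oversight_change"}
SUBSTANTIAL = {"new_intended_purpose", "major_perf_change",
               "new_data_domain", "architecture_change"}


def classify_change(change_type: str) -> str:
    """Triage a change into the three-tier classification framework."""
    if change_type in SUBSTANTIAL:
        return "substantial: trigger new conformity assessment"
    if change_type in POTENTIALLY_SUBSTANTIAL:
        return "potentially substantial: run formal impact assessment"
    if change_type in NON_SUBSTANTIAL:
        return "non-substantial: record in change log"
    return "unknown: escalate for manual review"
```

Note the deliberate default: anything not explicitly catalogued escalates to manual review rather than silently passing as non-substantial.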

Section 8: Post-Market Monitoring

The technical documentation must include a post-market monitoring plan describing how the provider will actively monitor the AI system's performance after deployment. This links directly to Article 72 obligations.

Required Information

  • Monitoring strategy: Overall approach to post-market monitoring, including objectives, scope, and proportionality considerations
  • Data collection mechanisms: What data will be collected from the AI system in operation, how it will be collected, and how frequently
  • Performance indicators: Key performance indicators (KPIs) that will be tracked, along with thresholds that trigger investigation or corrective action
  • Feedback mechanisms: How feedback from deployers and affected persons will be collected, processed, and acted upon
  • Incident detection and reporting: Procedures for detecting, investigating, and reporting serious incidents per Article 73
  • Review and update cycle: Frequency of documentation review and the triggers that prompt ad hoc updates (e.g., incidents, regulatory changes, performance degradation)
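The KPI-with-thresholds bullet above reduces to a small checking loop that can run on every monitoring report. The metric names and acceptance bands below are illustrative assumptions drawn from a hypothetical monitoring plan.

```python
def check_kpis(observed: dict[str, float],
               thresholds: dict[str, tuple[float, float]]) -> list[str]:
    """Return alerts for KPIs missing or outside their (min, max) band."""
    alerts = []
    for kpi, (lo, hi) in thresholds.items():
        value = observed.get(kpi)
        if value is None:
            alerts.append(f"{kpi}: no data collected")
        elif not (lo <= value <= hi):
            alerts.append(f"{kpi}: {value} outside [{lo}, {hi}] - investigate")
    return alerts


# Illustrative KPIs and acceptance bands
thresholds = {
    "weekly_accuracy": (0.90, 1.00),
    "override_rate": (0.00, 0.15),   # share of decisions overridden by humans
}
```

Each triggered alert, and the investigation it led to, is exactly the kind of evidence the periodic monitoring reports should capture.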

Post-Market Monitoring Deliverables

  • Post-market monitoring plan: The formal plan included in the technical documentation
  • Monitoring dashboard or reports: Operational tools/reports tracking system performance against established KPIs
  • Incident response procedure: Documented procedure for handling serious incidents, including roles, escalation paths, and reporting timelines
  • Periodic monitoring reports: Regular summary reports documenting ongoing compliance and any actions taken

Maintaining Documentation Over Time

The most common failure mode in technical documentation is not the initial creation — it is the ongoing maintenance. The EU AI Act explicitly requires that documentation is kept up to date, and auditors will look for evidence of a living documentation process.

Documentation Management Best Practices

  • Assign document ownership: Each section of the technical documentation should have a named owner responsible for its accuracy and currency
  • Automate where possible: Use CI/CD pipelines to automatically generate or update technical documentation from code, test results, and model metadata. Tools like model cards and data cards can be integrated into your ML pipeline
  • Version control everything: Technical documentation must be version-controlled with clear traceability between document versions and system versions
  • Schedule regular reviews: Establish a minimum annual review cycle for the entire technical documentation pack, with ad hoc reviews triggered by system changes, incidents, or regulatory updates
  • Maintain a compliance matrix: Create a traceability matrix mapping each Annex IV requirement to the specific document, section, or evidence that addresses it. This is invaluable during audits
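The compliance matrix can be as simple as a mapping from Annex IV points to evidence locations, with a completeness check run in CI. The requirement keys and document paths below are illustrative placeholders.

```python
# Annex IV point -> evidence document (illustrative paths, partial list)
compliance_matrix = {
    "IV.1 general description": "docs/system_overview.md",
    "IV.2 elements and development": "docs/architecture.md",
    "IV.3 monitoring and control": "docs/oversight.md",
    "IV.4 performance metrics": "",   # gap: not yet documented
    "IV.5 risk management": "docs/risk_file.md",
}


def find_gaps(matrix: dict[str, str]) -> list[str]:
    """List Annex IV points with no evidence mapped."""
    return [req for req, evidence in matrix.items() if not evidence.strip()]
```

Failing the build when `find_gaps` returns anything turns documentation completeness from an audit-time scramble into a routine check.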

Retention Requirements

Technical documentation must be retained for at least 10 years after the AI system has been placed on the market or put into service. This includes:

  • All versions of the technical documentation
  • Change logs and modification impact assessments
  • Test results and validation reports
  • Risk management records
  • Post-market monitoring data and reports
  • Correspondence with notified bodies (if applicable)
  • The EU declaration of conformity

Practical Architecture Recommendation

Structure your technical documentation as a modular document set rather than a single monolithic file. Use a master document that indexes all components and provides the compliance traceability matrix. Individual components (risk assessment, data governance, test reports) can then be maintained independently by their respective owners while the master document ensures completeness and coherence.

Comprehensive Annex IV Requirements Mapping

The following table maps every Annex IV requirement to specific deliverables and evidence your organisation should prepare:

| Annex IV Requirement | Practical Deliverable | Evidence / Artefact |
| --- | --- | --- |
| Intended purpose description | Intended purpose statement | Formal document specifying exact use case, users, and boundaries |
| Provider identity and version | System registration record | Provider details, version register, release notes |
| Hardware/software interaction | Integration specification | API documentation, dependency manifest, infrastructure diagrams |
| Development methodology | Development process document | SDLC documentation, code review records, sprint/release history |
| System architecture and algorithms | Architecture design document | System diagrams, model architecture specs, algorithm descriptions |
| Third-party components | Bill of materials (AI BOM) | Component inventory with versions, licences, and provenance |
| Training data governance | Data cards / data catalogue | Dataset documentation, collection procedures, quality assessments |
| Bias examination | Bias assessment report | Fairness analysis, demographic breakdown, mitigation actions |
| Performance metrics | Performance evaluation report | Metric definitions, benchmark results, subgroup analysis |
| Robustness testing | Robustness test report | Adversarial testing results, perturbation analysis, stress tests |
| Cybersecurity measures | Security assessment report | Threat modelling, vulnerability assessment, penetration test results |
| Risk management system | Risk management file | Risk plan, register, assessment reports, mitigation evidence |
| Change history | Change management log | Change records, impact assessments, re-assessment outcomes |
| Harmonised standards applied | Standards application statement | List of standards, coverage mapping, deviations documented |
| Declaration of conformity | Signed EU declaration | Article 47-compliant declaration with all required particulars |
| Post-market monitoring plan | Post-market monitoring plan and procedures | Monitoring plan, KPIs, incident procedure, reporting templates |

Frequently Asked Questions

What is the technical documentation requirement under Article 11 of the EU AI Act?

Article 11 requires providers of high-risk AI systems to draw up comprehensive technical documentation before the system is placed on the market or put into service. The documentation must demonstrate compliance with all requirements of Chapter III, Section 2 and provide national competent authorities and notified bodies with the information necessary to assess compliance. The specific content requirements are set out in Annex IV of the regulation, covering system description, design specifications, data governance, risk management, testing, and post-market monitoring.

What does Annex IV of the EU AI Act require?

Annex IV specifies nine categories of mandatory information: (1) general system description including intended purpose and provider details; (2) detailed description of system elements and the development process; (3) information on monitoring, functioning, and control mechanisms; (4) description of performance metrics and their appropriateness; (5) detailed description of the risk management system; (6) description of changes made throughout the lifecycle; (7) list of harmonised standards or common specifications applied; (8) a copy of the EU declaration of conformity; and (9) a description of the post-market monitoring plan.

How often must the technical documentation be updated?

Technical documentation must be kept up to date continuously throughout the AI system's lifecycle. Any modification that may affect compliance with the requirements of Chapter III, Section 2 must be reflected in updated documentation. As a practical minimum, conduct a full documentation review at least annually. Additionally, trigger ad hoc updates upon system modifications, serious incidents, new regulatory guidance, or significant changes in the operational environment.

How long must technical documentation be retained?

Providers must retain the technical documentation for a minimum of 10 years after the high-risk AI system has been placed on the market or put into service. This includes all versions of the documentation, change records, test results, risk assessments, and the declaration of conformity. The documentation must be available to national competent authorities upon request throughout this period.

Can we use ISO 42001 documentation to satisfy Article 11 requirements?

ISO 42001 documentation provides a strong foundation, particularly in areas like system description, risk management, data governance, and lifecycle management. However, Annex IV contains specific requirements that go beyond ISO 42001, including explicit performance metric documentation, formal robustness and cybersecurity testing evidence, detailed change logs, harmonised standards application statements, and the declaration of conformity. We recommend conducting a gap analysis to identify where additional documentation is needed on top of your ISO 42001 artefacts.