Key Takeaways
  • The EU AI Act has extraterritorial reach — it applies to organizations worldwide if their AI system is placed on the EU market or its output is used in the EU
  • Four distinct operator roles exist: Provider, Deployer, Importer, and Distributor — each with specific, non-interchangeable obligations
  • A single organization can hold multiple roles simultaneously (e.g., provider and deployer) and must fulfil all corresponding obligations
  • Deployers who substantially modify a high-risk AI system or change its intended purpose become providers and inherit full provider obligations
  • Important exemptions exist for military/defence, scientific R&D, personal non-professional use, and certain open-source AI — but they are narrower than many assume

Who Does the EU AI Act Apply To?

The EU AI Act (Regulation 2024/1689) applies far more broadly than many organizations initially assume. Its scope extends beyond EU-based entities to capture any organization in the global AI value chain that has a connection to the EU market or EU-located persons.

At its core, the Act regulates the placing on the market, putting into service, and use of AI systems in the Union. But the critical nuance is where the effects occur, not just where the organization sits. A US-headquartered SaaS company whose AI-powered product is used by customers in Germany is in scope. An Indian IT outsourcing firm developing an AI system that will be deployed in France is in scope. A Chinese manufacturer whose AI-powered quality control system is embedded in products sold in the EU single market is in scope.

The Act applies to the following categories of actors:

  • Providers placing AI systems on the EU market or putting them into service in the EU — regardless of whether the provider is established in the EU or a third country
  • Deployers of AI systems who have their place of establishment in, or are located in, the EU
  • Providers and deployers of AI systems established in a third country where the output produced by the system is used in the EU
  • Importers and distributors of AI systems
  • Product manufacturers who place on the market or put into service an AI system together with their product and under their own name or trademark
  • Authorized representatives of providers not established in the EU

Territorial Scope (Article 2)

Article 2 defines the territorial scope of the EU AI Act. Three key connection points trigger applicability:

Connection 1: EU Market Placement

If you place an AI system on the EU market or put it into service in the EU, the Act applies. "Placing on the market" means the first making available of an AI system on the EU market. "Putting into service" means the supply of an AI system for first use directly to the deployer or for own use for its intended purpose. This covers both commercial sale and internal deployment within the EU.

Connection 2: EU-Based Deployer

If you are a deployer of an AI system and you are established in the EU, or you are located in the EU, the Act applies to your use of the AI system. This captures EU organizations using AI tools built anywhere in the world.

Connection 3: Output Used in the EU

If you are a provider or deployer established outside the EU, but the output produced by your AI system is used in the EU, the Act applies. This is the broadest connection point — it means that a non-EU company whose AI model generates outputs (predictions, recommendations, decisions, content) consumed by persons in the EU is subject to the Act.

Practical Example: Output Used in the EU

A US-based fintech company provides a credit scoring API. EU-based banks call this API to assess loan applications from EU citizens. Even though the fintech company has no EU office, the AI system's output (credit scores) is used in the EU to make decisions about EU persons. The US company is in scope as a provider, and the EU banks are in scope as deployers.
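The three connection points amount to an any-of test: a single "yes" brings the system in scope. As a minimal sketch of that decision logic (the class and field names are illustrative, not taken from the Act):

```python
from dataclasses import dataclass

@dataclass
class ScopeFacts:
    """Answers to the three Article 2 connection-point questions (illustrative names)."""
    placed_on_eu_market: bool   # Connection 1: placed on the market / put into service in the EU
    operator_in_eu: bool        # Connection 2: provider or deployer established or located in the EU
    output_used_in_eu: bool     # Connection 3: system output used by persons in the EU

def in_territorial_scope(facts: ScopeFacts) -> bool:
    # Any single connection point is sufficient to trigger applicability.
    return facts.placed_on_eu_market or facts.operator_in_eu or facts.output_used_in_eu

# The fintech example above: a US provider with no EU office,
# but credit scores consumed by EU banks.
fintech = ScopeFacts(placed_on_eu_market=False, operator_in_eu=False, output_used_in_eu=True)
print(in_territorial_scope(fintech))  # True: in scope via Connection 3
```

The same check applies per AI system, not per organization: a vendor can be out of scope for one product and in scope for another.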

Authorized Representative Requirement

Providers established outside the EU who place AI systems on the EU market must appoint an authorized representative in the EU before making the system available (Article 22). The authorized representative serves as the point of contact for EU authorities and must have a written mandate from the provider to act on their behalf for EU AI Act compliance purposes.

The Four Operator Roles

The EU AI Act defines four distinct operator roles. Each role carries specific obligations. Understanding which role(s) you hold is the foundational step for determining your compliance requirements.

Provider
  • Definition: Natural or legal person that develops an AI system or GPAI model, or has it developed, and places it on the market or puts it into service under their own name or trademark
  • Key trigger: Develops or commissions development + markets under own name
  • Obligation weight: Heaviest — full upstream obligations

Deployer
  • Definition: Natural or legal person that uses an AI system under their authority, except where the AI system is used in the course of a personal non-professional activity
  • Key trigger: Uses AI system professionally under own authority
  • Obligation weight: Significant — downstream operational obligations

Importer
  • Definition: Natural or legal person established in the EU that places on the market an AI system bearing the name or trademark of a natural or legal person established in a third country
  • Key trigger: EU-based entity bringing non-EU AI system to market
  • Obligation weight: Moderate — verification and documentation duties

Distributor
  • Definition: Natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the EU market
  • Key trigger: Makes AI system available without being provider or importer
  • Obligation weight: Lighter — due diligence and traceability duties

Provider Obligations

Providers carry the most extensive obligations under the EU AI Act. As the entity responsible for designing, developing, and bringing an AI system to market, the provider must ensure the system meets all applicable requirements before it reaches deployers and end users.

Obligations for High-Risk AI System Providers (Articles 16-22)

  • Compliance with Section 2 requirements: Ensure the AI system meets all technical requirements of Chapter III, Section 2 — risk management (Art. 9), data governance (Art. 10), technical documentation (Art. 11), record-keeping (Art. 12), transparency (Art. 13), human oversight (Art. 14), accuracy/robustness/cybersecurity (Art. 15)
  • Quality management system: Establish and maintain a quality management system to ensure compliance (Art. 17)
  • Technical documentation: Draw up and keep up-to-date technical documentation per Annex IV (Art. 11)
  • Conformity assessment: Undergo the applicable conformity assessment procedure before placing the system on the market (Art. 43)
  • EU Declaration of Conformity: Draw up a written EU Declaration of Conformity (Art. 47)
  • CE marking: Affix the CE marking to the AI system (Art. 48)
  • EU database registration: Register the AI system in the EU database before placing it on the market (Art. 49)
  • Post-market monitoring: Establish and document a post-market monitoring system (Art. 72)
  • Serious incident reporting: Report serious incidents to market surveillance authorities (Art. 73)
  • Corrective actions: Take corrective action when the system is not compliant, and inform distributors/deployers (Art. 20)
  • Cooperation with authorities: Cooperate with competent national authorities upon request (Art. 21)
  • Authorized representative: Appoint an EU-based authorized representative if the provider is established outside the EU (Art. 22)

Additional Provider Obligations for All AI Systems

  • AI literacy (Art. 4): Ensure staff and persons dealing with the operation and use of AI systems have sufficient AI literacy, taking into account their technical knowledge, experience, education, and context of use
  • Prohibited practices (Art. 5): Ensure no AI system engages in prohibited practices regardless of risk classification
  • Transparency obligations (Art. 50): For AI systems that interact with persons, generate content, or detect emotions — ensure appropriate disclosures

Deployer Obligations

Deployers — the organizations that use AI systems in their operations — carry substantial obligations, particularly for high-risk AI systems. The EU AI Act deliberately shifted responsibility downstream to ensure that AI systems are used responsibly, not just designed responsibly.

High-Risk AI System Deployer Obligations (Article 26)

  • Use in accordance with instructions: Use the AI system in accordance with the provider's instructions for use
  • Human oversight: Assign human oversight to natural persons who have the necessary competence, training, and authority
  • Input data relevance: Ensure that input data is relevant and sufficiently representative for the intended purpose
  • Monitoring: Monitor the AI system's operation based on the instructions for use. If there is reason to believe the system poses a risk, inform the provider or distributor and suspend use
  • Log retention: Keep logs automatically generated by the high-risk AI system for the period appropriate to its intended purpose (at least six months unless otherwise specified by law)
  • DPIA requirement: When required by EU or national law, carry out a data protection impact assessment (DPIA) using information provided by the provider
  • Transparency to affected persons: Inform natural persons that they are subject to a high-risk AI system (in certain use cases per Article 26(11))
  • Workplace information: Inform workers' representatives and affected workers that they will be subject to the AI system before putting it into service
  • Cooperation with authorities: Cooperate with competent authorities upon request

Deployer Obligations for All AI Systems

  • AI literacy (Art. 4): The AI literacy obligation applies equally to deployers — ensure relevant staff have appropriate competence
  • Prohibited practices (Art. 5): Deployers must not use AI systems in prohibited ways
  • Transparency (Art. 50): Deployers of systems that interact with persons or generate content must make appropriate disclosures

Key Deployer Responsibility

Deployers are not passive consumers of AI systems. The EU AI Act explicitly requires deployers to actively monitor AI system performance, ensure appropriate human oversight, verify that input data is suitable, and report concerns to providers. "We just use the tool our vendor gave us" is not a compliance defence.

Importer & Distributor Duties

Importer Obligations (Articles 23-24)

Importers are EU-established entities that bring AI systems from non-EU providers into the EU market. Their role is essentially gatekeeping — ensuring that non-EU AI systems meet EU AI Act requirements before entering the market.

  • Verification before placement: Verify that the provider has carried out the appropriate conformity assessment procedure
  • Documentation check: Ensure the provider has drawn up the technical documentation per Annex IV
  • CE marking and Declaration: Verify the AI system bears CE marking and is accompanied by the EU Declaration of Conformity
  • Authorized representative: Verify the provider has appointed an authorized representative in the EU (Article 22)
  • Identification: Indicate their name, registered trade name or trademark, and contact address on the AI system or its packaging or documentation
  • Storage and transport: Ensure that storage or transport conditions do not jeopardize the AI system's compliance
  • Non-compliance action: If the importer has reason to believe the AI system is not in conformity, it must not place the system on the market until conformity is ensured, and must inform the provider and the market surveillance authorities
  • Record retention: Keep a copy of the EU Declaration of Conformity and make the technical documentation available to authorities
  • Cooperation: Cooperate with competent national authorities upon request

Distributor Obligations (Article 25)

Distributors are entities in the supply chain (other than providers and importers) that make AI systems available on the EU market. Their obligations are lighter but still meaningful:

  • Due diligence: Verify that the high-risk AI system bears CE marking, is accompanied by required documentation, and that the provider and importer have complied with their obligations
  • Storage and transport: Ensure conditions do not jeopardise compliance
  • Non-compliance action: If the distributor considers or has reason to believe the system is not in conformity, it must not make the system available until conformity is ensured
  • Corrective action support: Cooperate with providers, importers, and authorities on corrective actions including recalls
  • Information provision: Provide competent authorities with information and documentation as requested

When Roles Overlap

In practice, organizations frequently hold more than one operator role. Understanding role overlap is critical because you must satisfy the obligations of every role you hold simultaneously.

Common Overlap Scenarios

Company develops AI system for internal use
  • Roles held: Provider + Deployer
  • Compliance implication: Must satisfy both provider obligations (design, documentation, conformity) and deployer obligations (human oversight, monitoring, log retention)

EU company buys AI system from US vendor and uses it
  • Roles held: Importer + Deployer
  • Compliance implication: Must verify provider compliance (importer duty) and operate the system compliantly (deployer duty)

SaaS provider offers AI-powered features to EU customers
  • Roles held: Provider (possibly Deployer if processing EU data)
  • Compliance implication: Provider obligations for the AI system; deployer obligations if the provider also uses the system's output

System integrator customizes and resells AI system
  • Roles held: Potentially Provider (if substantial modification)
  • Compliance implication: Substantial modification triggers reclassification as provider with full provider obligations

EU reseller distributes non-EU AI systems to customers
  • Roles held: Importer + Distributor
  • Compliance implication: Must perform importer verification plus distributor due diligence

The "Becoming a Provider" Trigger (Article 25)

A deployer, distributor, importer, or other third party is considered a provider of a high-risk AI system and must comply with full provider obligations if they:

  • Put their name or trademark on a high-risk AI system already placed on the market
  • Make a substantial modification to a high-risk AI system already placed on the market
  • Modify the intended purpose of an AI system (including a general-purpose AI system) that has not been classified as high-risk, in a way that makes it high-risk

A "substantial modification" means a change to the AI system after its placing on the market or putting into service that affects the system's compliance with the applicable requirements or results in a modification to the intended purpose for which the AI system has been assessed.

Watch Out: Fine-Tuning Can Trigger Provider Status

If a deployer fine-tunes or retrains a high-risk AI model to the extent that it constitutes a substantial modification, they become a provider for that system. This is particularly relevant for organizations customizing foundation models or large language models for high-risk use cases. Document every modification to AI systems and assess whether it crosses the "substantial" threshold.
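Like the territorial test, the Article 25 escalation test is an any-of check over the three triggers above. A minimal sketch, with illustrative parameter names:

```python
def becomes_provider(puts_own_name_on_system: bool,
                     substantial_modification: bool,
                     repurposed_into_high_risk: bool) -> bool:
    """Article 25: any one trigger reclassifies the operator as a provider
    of the high-risk AI system, with full provider obligations."""
    return puts_own_name_on_system or substantial_modification or repurposed_into_high_risk

# A deployer that fine-tunes a high-risk model past the "substantial" threshold:
print(becomes_provider(False, True, False))  # True
```

The hard part in practice is not this check but the judgment feeding it — whether a given modification is "substantial" — which is why each modification should be documented and assessed individually.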

Exemptions & Exclusions

The EU AI Act contains several important exemptions. However, organizations frequently overestimate the breadth of these exemptions. Each one has specific conditions and boundaries.

Military and Defence (Article 2(3))

The EU AI Act does not apply to AI systems placed on the market, put into service, or used exclusively for military, defence, or national security purposes, regardless of the type of entity carrying out those activities. This is an absolute exemption for genuinely military applications, but it does not extend to dual-use AI systems that serve both military and civilian purposes.

Scientific Research and Development (Article 2(6))

AI systems and models specifically developed and put into service for the sole purpose of scientific research and development are not covered. This exemption applies to genuine research activities — experimental work to acquire new knowledge or test hypotheses. It does not cover:

  • Pre-commercial development of market-bound AI systems
  • Applied research intended for commercial deployment
  • Testing and piloting of systems intended for market placement

Personal Non-Professional Use (Article 2(10))

The Act does not apply to natural persons using AI systems in the course of a purely personal non-professional activity. This means consumers using AI tools for personal purposes are not regulated as deployers. However, the provider of that AI system is still subject to the Act.

Public Authorities in Third Countries (Article 2(4))

Public authorities of third countries and international organizations are excluded when they use AI systems in the framework of international agreements for law enforcement and judicial cooperation with the EU or its Member States, provided that adequate safeguards for fundamental rights are in place.

Open-Source Exemptions

The EU AI Act provides a partial exemption for open-source AI:

  • GPAI models released under open-source licenses: Exempt from most GPAI model obligations — but must still publish a summary of training content and maintain a copyright compliance policy. If the model is classified as posing systemic risk, the exemption does not apply.
  • High-risk AI systems using open-source components: NOT exempt. If an AI system is classified as high-risk, using open-source models or code within it does not reduce the provider's obligations. The full Chapter III, Section 2 requirements apply.
  • Free and open-source AI components: Providers of free and open-source AI components (other than GPAI models) placed on the market are exempt from requirements other than transparency/information requirements — unless the component is integrated into a high-risk system.

Other Exclusions

  • AI systems released under free and open-source licences: Exempt unless placed on the market or put into service as high-risk AI systems, or caught by the prohibited-practices (Art. 5) or transparency (Art. 50) provisions
  • AI within scope of other EU product legislation: Where AI is a component of a product governed by existing EU harmonization legislation (e.g., medical devices, machinery), the AI Act requirements are integrated into the existing conformity assessment framework for that product

Practical Role Determination

Determining your organization's role(s) under the EU AI Act requires a systematic assessment. Use the following process for each AI system in your inventory:

Step 1: Identify Your Relationship to the AI System

For each AI system, answer these questions:

  1. Did you develop the AI system (or have it developed) and place it on the market or put it into service under your name or trademark? → Provider
  2. Do you use the AI system under your own authority in a professional context? → Deployer
  3. Are you an EU-established entity bringing a non-EU provider's AI system into the EU market? → Importer
  4. Do you make the AI system available on the EU market without being the provider or importer? → Distributor

Step 2: Check for Role Escalation

Even if your initial role is deployer, distributor, or importer, check whether any of these conditions promote you to provider:

  • Have you put your name or trademark on the AI system?
  • Have you made a substantial modification to the AI system?
  • Have you changed the intended purpose of the AI system?

Step 3: Assess Territorial Connection

  • Is the AI system placed on the EU market or put into service in the EU?
  • Are you established in the EU?
  • Is the output of the AI system used in the EU?

If any of these are yes, you are in scope.

Step 4: Check Exemptions

  • Is this exclusively for military/defence/national security?
  • Is this solely for scientific research and development with no commercial intent?
  • Is this personal non-professional use?
  • Does the open-source exemption apply?

If any exemption applies, document the basis for claiming it — authorities may request this evidence.
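Steps 1 through 4 can be combined into a single determination pass per AI system. A sketch under the same caveat as the questions above — the inputs are simply your answers to those questions, and the string role names are illustrative:

```python
def applicable_roles(initial_roles: set[str],
                     escalated_to_provider: bool,
                     in_territorial_scope: bool,
                     exemption_applies: bool) -> set[str]:
    """Return the operator roles whose obligations apply (Steps 1-4).
    An empty set means 'not in scope' -- document the rationale anyway (Step 5)."""
    if not in_territorial_scope or exemption_applies:
        return set()
    if escalated_to_provider:
        # Role escalation (Step 2) adds provider duties on top of the initial role(s).
        return initial_roles | {"provider"}
    return initial_roles

# An EU deployer that substantially modified a high-risk system:
print(sorted(applicable_roles({"deployer"}, True, True, False)))  # ['deployer', 'provider']
```

Note that escalation adds the provider role rather than replacing the original one: the organization still uses the system and keeps its deployer obligations.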

Step 5: Document the Determination

For each AI system, record:

  • Your operator role(s)
  • The territorial connection triggering applicability
  • Any exemptions claimed (with rationale)
  • The risk classification of the AI system
  • The specific obligations that apply to your role(s) and risk tier

Documentation Is Your Defence

Even if you conclude that the EU AI Act does not apply to a particular AI system, document why. A "not in scope" determination backed by documented analysis demonstrates good governance and protects you if authorities question your assessment. This is especially important for borderline cases — such as AI systems that might or might not qualify as "high-risk" under Annex III.

Frequently Asked Questions

Does the EU AI Act apply to companies outside the EU?

Yes. The EU AI Act applies extraterritorially. If you are a provider placing an AI system on the EU market, or the output of your AI system is used by persons in the EU, the Act applies regardless of where your organization is established. Providers outside the EU must also appoint an authorized representative in the EU before making a system available on the EU market. This extraterritorial scope is modelled on the GDPR and ensures that non-EU entities cannot circumvent the regulation simply by being located outside Europe.

What is the difference between a provider and a deployer under the EU AI Act?

A provider develops an AI system (or has it developed) and places it on the market or puts it into service under their own name or trademark. A deployer uses an AI system under their authority in a professional capacity. The key distinction is upstream versus downstream: providers are responsible for design, development, conformity assessment, and documentation, while deployers are responsible for proper use, human oversight, monitoring, and ensuring input data is appropriate. Both roles carry significant obligations for high-risk AI systems.

Can an organization be both a provider and a deployer?

Yes, and it is very common. An organization that develops AI systems for internal operational use is both a provider (it developed the system) and a deployer (it uses the system). Similarly, a deployer becomes a provider if they substantially modify a high-risk AI system, put their name or trademark on it, or change its intended purpose in a way that triggers high-risk classification. When holding dual roles, the organization must fulfil the obligations of both roles simultaneously.

Is open-source AI exempt from the EU AI Act?

Partially, with important nuances. General-purpose AI models released under free and open-source licenses are exempt from most GPAI model obligations (but must still publish a training content summary and comply with copyright law). However, if the GPAI model is classified as posing systemic risk, the exemption does not apply. Critically, open-source AI systems classified as high-risk are NOT exempt from the Chapter III, Section 2 high-risk requirements. Using open-source components in a high-risk system does not reduce the provider's obligations.

Are research and development AI systems covered by the EU AI Act?

AI systems and models specifically developed and put into service for the sole purpose of scientific research and development are excluded from the Act. This exemption covers genuine research activities — work to acquire new knowledge or test hypotheses. It does not cover pre-commercial development, applied research with commercial intent, product testing, or pilot deployments. Once an AI system moves from R&D to production deployment or market placement, it falls within scope. Organizations should clearly delineate research-phase AI from production-phase AI.