In This Article
- The EU AI Act uses a phased timeline spanning August 2024 to August 2027 — not everything takes effect at once
- Prohibited AI practices and AI literacy obligations have been enforceable since February 2, 2025
- GPAI model obligations (including for large language models) apply from August 2, 2025
- Most high-risk AI requirements and all financial penalties apply from August 2, 2026
- AI embedded in regulated products (medical devices, machinery, etc.) has until August 2, 2027 for full compliance
The Phased Approach
The EU AI Act does not switch on overnight. Recognizing that organizations, standards bodies, and regulators all need time to prepare, EU legislators designed a staggered implementation schedule. Each phase activates a distinct set of obligations, targeting different actors and AI system categories.
This phased approach is deliberate: it addresses the most immediately harmful AI practices first (prohibited systems), then moves to foundational AI models (GPAI), then to the bulk of high-risk obligations, and finally to AI systems embedded within products already subject to EU safety legislation.
For compliance teams, this means that the question is not simply "are we compliant?" but rather "are we compliant with the obligations that apply right now, and are we on track for the next deadline?" This article provides a milestone-by-milestone breakdown so you can answer both questions with confidence.
Master Timeline at a Glance
| Date | Milestone | Primary Actors Affected | Status |
|---|---|---|---|
| Aug 1, 2024 | Entry into force | All (awareness) | Active |
| Feb 2, 2025 | Prohibited practices + AI literacy | All providers & deployers | Active |
| Aug 2, 2025 | GPAI obligations + governance | GPAI providers, national authorities | Upcoming |
| Aug 2, 2026 | High-risk (Annex III) + penalties | High-risk providers & deployers | Future |
| Aug 2, 2027 | Regulated products (Annex I) + full enforcement | Product manufacturers using AI | Future |
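The phased timeline lends itself to a simple lookup. The sketch below is purely illustrative: the dates come from the Act's application schedule, but the milestone labels are this article's shorthand, and the helper function is a hypothetical example, not part of any official tooling.

```python
from datetime import date

# Application dates from the EU AI Act's phased timeline; labels are
# this article's shorthand summaries, not legal terms of art.
MILESTONES = [
    (date(2024, 8, 1), "Entry into force"),
    (date(2025, 2, 2), "Prohibited practices + AI literacy"),
    (date(2025, 8, 2), "GPAI obligations + governance"),
    (date(2026, 8, 2), "High-risk (Annex III) + penalties"),
    (date(2027, 8, 2), "Regulated products (Annex I) + full enforcement"),
]

def active_milestones(on: date) -> list[str]:
    """Return the milestones already in application on a given date."""
    return [label for start, label in MILESTONES if on >= start]
```

For example, `active_milestones(date(2025, 3, 1))` returns only the first two entries, reflecting the compliance question posed above: which obligations apply right now.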
August 1, 2024: Entry into Force
The EU AI Act was published in the Official Journal of the European Union on July 12, 2024. Following the standard 20-day period, the regulation entered into force on August 1, 2024.
What Happened at This Stage
- Legal existence: The regulation became part of EU law, though most obligations did not yet apply. This date started the clock on all subsequent deadlines.
- AI Office establishment: The European Commission began establishing the AI Office within its structure — the central body responsible for GPAI model oversight and coordinating enforcement across member states.
- Standards development: European standardization organizations (CEN, CENELEC) accelerated work on harmonized standards that will provide presumption of conformity for high-risk AI systems.
- National preparations: EU member states began designating national competent authorities and market surveillance authorities for AI Act enforcement.
What Organizations Should Have Done
This phase was primarily about awareness and planning. Organizations that used this period effectively began inventorying their AI systems, conducting preliminary risk classifications, and establishing internal governance structures. Those that waited lost valuable preparation time for the February 2025 deadline.
February 2, 2025: Prohibited Practices & AI Literacy
Six months after entry into force, the first substantive obligations took effect on February 2, 2025. This milestone targeted the most harmful AI practices and established baseline awareness requirements.
Prohibited AI Practices (Article 5)
The following AI practices became illegal across the EU:
- Social scoring: AI systems used by or on behalf of public authorities to evaluate or classify individuals based on social behavior or personal characteristics, where the resulting treatment is detrimental and disproportionate
- Subliminal and manipulative techniques: AI that deploys techniques beyond a person's consciousness or purposefully manipulative methods to distort behavior in a way that causes significant harm
- Exploitation of vulnerabilities: AI systems that exploit age, disability, or socioeconomic vulnerability to manipulate behavior harmfully
- Biometric categorization on sensitive attributes: Systems that infer race, political opinions, trade union membership, religious beliefs, or sexual orientation from biometric data
- Untargeted facial-image scraping: Building or expanding facial-recognition databases through untargeted scraping from the internet or CCTV
- Emotion inference in workplaces and education: AI that infers emotions of individuals at work or in educational settings (except for medical or safety reasons)
- Real-time biometric identification in public spaces: Used by law enforcement, subject to narrowly defined exceptions
These prohibitions are already in effect. Organizations using AI-powered employee sentiment analysis, emotion-detection hiring tools, or social-behavioral scoring systems must have discontinued these practices. Violations of prohibited-practices rules carry the highest penalties: up to EUR 35 million or 7% of global turnover.
AI Literacy (Article 4)
Alongside the prohibited practices, Article 4 became applicable, requiring providers and deployers to ensure that their staff and other persons dealing with the operation and use of AI systems have a sufficient level of AI literacy. This is a broad obligation — it applies to all providers and deployers, regardless of the risk level of their AI systems.
AI literacy means ensuring people understand:
- What AI systems are and how they generally work
- The potential risks and limitations of AI outputs
- How to use AI systems appropriately within their role
- The organization's AI governance policies and their individual responsibilities
Who Was Affected
Every organization that provides or deploys AI systems in the EU is affected, regardless of risk category. Even organizations with only minimal-risk AI systems must meet AI literacy requirements and ensure they are not engaged in any prohibited practices.
August 2, 2025: GPAI & Governance
The next major milestone is August 2, 2025, when General-Purpose AI (GPAI) model obligations and governance structures come into effect.
General-Purpose AI Model Obligations (Chapter V)
GPAI models (AI models trained on broad data that can serve many purposes, including large language models such as GPT-4) must comply with specific obligations:
All GPAI Model Providers Must:
- Prepare technical documentation: Document model architecture, training methodology, computational resources used, training data summaries, and known limitations
- Provide information to downstream providers: Supply sufficient information for downstream AI system providers to understand the model's capabilities and integrate it compliantly
- Copyright compliance: Implement policies to comply with EU copyright law, including the text and data mining opt-out mechanism
- Publish training data summaries: Make publicly available a sufficiently detailed summary of the content used for training the GPAI model, following a template published by the AI Office
GPAI Models with Systemic Risk: Additional Obligations
GPAI models that meet the "systemic risk" threshold — generally those trained with more than 10^25 FLOPs of computation, or as designated by the Commission based on capabilities assessment — face additional requirements:
- Model evaluation: Perform and document standardized model evaluations, including adversarial testing
- Risk assessment and mitigation: Assess and mitigate possible systemic risks, including how the model can be used maliciously
- Cybersecurity protections: Ensure an adequate level of cybersecurity protection for the model and its physical infrastructure
- Incident tracking and reporting: Track, document, and report serious incidents to the AI Office and relevant national authorities without undue delay
- Energy consumption reporting: Report estimated energy consumption of the model
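The compute threshold mentioned above can be expressed as a one-line check. This is a minimal sketch assuming only the presumption rule: a model trained with more than 10^25 floating-point operations is presumed to present systemic risk. It does not model the Commission's power to designate models below the threshold based on a capabilities assessment.

```python
# Presumption threshold for systemic-risk GPAI models: cumulative
# training compute above 1e25 floating-point operations. Models below
# the threshold can still be designated by the Commission; that case
# is outside this sketch.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if the model exceeds the compute-based presumption threshold."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD
```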
Governance Bodies Established
By this date, the EU's AI governance structure must be operational:
- AI Office: The central EU body responsible for GPAI oversight, enforcement coordination, and developing codes of practice
- AI Board: Composed of member state representatives, advising the Commission on AI Act implementation and ensuring consistent application
- Advisory Forum: A stakeholder group providing technical expertise from industry, civil society, and academia
- Scientific Panel: Independent experts supporting the AI Office on GPAI model evaluation and systemic-risk assessment
Codes of Practice
The AI Office is developing codes of practice for GPAI model providers, which are expected to be finalized around this date. These codes provide detailed guidance on how to meet the Chapter V obligations in practice. While not legally binding in themselves, adherence to an approved code of practice is a recognized means for providers to demonstrate compliance with the underlying obligations.
August 2, 2026: High-Risk AI & Penalties
August 2, 2026 is the most significant compliance deadline for most organizations. This is when the bulk of the AI Act's obligations take effect.
What Applies
High-Risk AI System Requirements (Annex III)
Full compliance required for stand-alone high-risk AI systems in areas including:
- Biometrics — remote biometric identification (where permitted), biometric categorization
- Critical infrastructure — AI managing traffic, energy, water, heating supply
- Education and training — determining access, evaluating outcomes, monitoring behavior
- Employment and workers management — recruitment, screening, evaluation, promotion, termination, task allocation
- Essential private and public services — creditworthiness, insurance risk, emergency services
- Law enforcement — evidence assessment, profiling, risk assessment, polygraphs
- Migration, asylum, and border control — risk assessment, document verification, application processing
- Administration of justice — AI assisting courts in researching and applying law
Provider Obligations for High-Risk Systems
Providers must have in place:
- Risk management system (continuous, iterative, lifecycle-spanning)
- Data governance (training data quality, relevance, bias examination)
- Technical documentation (comprehensive, pre-market, maintained)
- Automatic logging capabilities (traceability throughout lifetime)
- Transparency and instructions for use (enabling deployers to understand and use correctly)
- Human oversight design (ability to understand, monitor, intervene, override)
- Accuracy, robustness, and cybersecurity measures
- Conformity assessment (self or third-party, depending on category)
- CE marking and EU database registration
- Quality management system
- Post-market monitoring system
Deployer Obligations
Deployers of high-risk AI must:
- Use systems according to instructions for use
- Assign competent, authorized human overseers
- Monitor operations and report serious incidents
- Conduct fundamental-rights impact assessments (for specified deployers)
- Inform affected individuals they are subject to high-risk AI
- Keep logs generated by the system for the prescribed period
Penalty Provisions Fully Applicable
All penalty provisions become enforceable against all actors, covering:
| Violation Type | Maximum Penalty | Percentage of Turnover |
|---|---|---|
| Prohibited AI practices | EUR 35 million | 7% of global annual turnover |
| High-risk non-compliance | EUR 15 million | 3% of global annual turnover |
| Misleading information to authorities | EUR 7.5 million | 1% of global annual turnover |
The applicable amount is whichever is higher: the fixed figure or the percentage of global annual turnover. For SMEs and startups, the lower of the two amounts applies instead.
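The fine-ceiling rule in the table can be made concrete with a short sketch. The tier amounts come from the Act; the function name and structure are this example's own, and it is an illustration of the higher-of-two rule (and the lower-of-two rule for SMEs), not a legal calculator.

```python
# Penalty ceilings per violation tier: (fixed amount in EUR, share of
# global annual turnover). Values as stated in the Act's penalty table.
PENALTY_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "high_risk_noncompliance": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_fine(violation: str, global_turnover_eur: float,
             is_sme: bool = False) -> float:
    """Ceiling is the HIGHER of the two amounts; for SMEs, the lower."""
    fixed, pct = PENALTY_TIERS[violation]
    candidates = (fixed, pct * global_turnover_eur)
    return min(candidates) if is_sme else max(candidates)
```

For instance, a company with EUR 1 billion global turnover faces a ceiling of EUR 70 million (7% of turnover) for a prohibited-practices violation, since that exceeds the EUR 35 million fixed figure.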
Other August 2026 Obligations
- Transparency obligations for limited-risk AI systems (chatbot disclosure, deepfake labeling)
- Registration obligations for high-risk AI in the EU database
- Serious incident reporting obligations for providers and deployers
- National market surveillance authorities fully operational
August 2, 2027: Full Enforcement
The final milestone — August 2, 2027 — brings the AI Act to full application. By this date, the remaining obligations take effect, and every risk category and actor in the AI value chain falls within the enforcement regime.
Annex I High-Risk AI Systems
AI systems that are safety components of — or are themselves — products covered by existing EU harmonization legislation (listed in Annex I) must now meet all high-risk requirements. This includes AI embedded in:
- Medical devices (Regulation 2017/745 and 2017/746)
- Machinery and equipment (Machinery Regulation)
- Toys (Toy Safety Directive)
- Lifts (Lifts Directive)
- Radio equipment (Radio Equipment Directive)
- Motor vehicles (Vehicle type-approval regulations)
- Civil aviation (EASA regulations)
- Marine equipment (Marine Equipment Directive)
- Railway systems (Rail interoperability directives)
The additional year was provided because these products are already subject to existing EU safety-assessment regimes, and integrating AI Act conformity assessments into those processes requires coordination between multiple regulatory frameworks.
Full Enforcement Landscape
By August 2027, the regulatory landscape is fully operational:
- All risk categories enforced with penalties
- Harmonized standards published and in use
- EU database operational with mandatory registrations
- Market surveillance authorities conducting inspections
- Codes of practice finalized for all relevant areas
- Post-market surveillance and vigilance systems functioning
- Cross-border enforcement cooperation mechanisms active
What This Means by Operator Role
Different actors in the AI value chain face different timelines. Here is a role-based summary of when key obligations become applicable.
If You Are a Provider (Developer)
| Date | Your Obligations |
|---|---|
| Feb 2025 | Stop any prohibited AI practices; ensure staff AI literacy |
| Aug 2025 | If you provide GPAI models: documentation, copyright compliance, training data summaries. If systemic risk: adversarial testing, incident reporting |
| Aug 2026 | Full compliance for Annex III high-risk systems: risk management, data governance, documentation, logging, human oversight, conformity assessment, CE marking, EU database registration |
| Aug 2027 | Full compliance for Annex I products with embedded AI: integrate AI Act conformity with existing sectoral assessments |
If You Are a Deployer (User)
| Date | Your Obligations |
|---|---|
| Feb 2025 | Stop any prohibited AI use; ensure staff AI literacy |
| Aug 2026 | High-risk deployer obligations: follow instructions for use, assign human overseers, monitor and report, conduct fundamental-rights impact assessments (where required), inform affected individuals |
| Aug 2027 | Deployer obligations for AI in regulated products |
If You Are an Importer or Distributor
Importers and distributors must verify provider compliance before placing AI systems on the EU market. Their obligations align with the timelines for the relevant risk category — primarily from August 2026 for high-risk systems. They must also ensure traceability, cooperate with market surveillance authorities, and take corrective action if non-conformities are identified.
Preparing Your Organization for Each Phase
A timeline is only useful if you can act on it. Here is a phase-by-phase preparation guide for organizations that want to stay ahead of each deadline.
Phase 1: Immediate (Already Active)
- AI system inventory: If you have not done this yet, it is overdue. Catalogue every AI system you develop, provide, or deploy — including third-party tools, embedded AI components, and internal automation
- Prohibited-practices audit: Cross-check your inventory against Article 5. Discontinue any system or practice that falls within the prohibited categories
- AI literacy program: Develop and deliver role-based training: executives need governance awareness, developers need technical ethics training, and end-users need appropriate-use guidance
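An AI system inventory is easier to audit when each entry follows a consistent schema. The record below is a minimal sketch; the field names, risk labels, and the triage rule are this example's own choices, not terms defined by the Act.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One inventory entry per AI system developed, provided, or deployed."""
    name: str
    role: str           # "provider" or "deployer"
    risk_category: str  # "prohibited", "high", "limited", "minimal", "unclassified"
    vendor: str
    in_scope_eu: bool   # placed on the EU market or used in the EU

def needs_immediate_action(record: AISystemRecord) -> bool:
    """Flag systems needing action under the already-active obligations:
    anything prohibited must be discontinued, and anything unclassified
    blocks the Article 5 audit until it is assessed."""
    return record.in_scope_eu and record.risk_category in ("prohibited", "unclassified")
```

Filtering the inventory with a triage rule like this gives a concrete worklist for the prohibited-practices audit described above.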
Phase 2: Before August 2025
- GPAI assessment: Determine whether you provide GPAI models. If so, begin preparing technical documentation, training-data summaries, and copyright-compliance evidence
- Systemic-risk evaluation: If your GPAI model exceeds the computational threshold or has capabilities that may pose systemic risk, engage with the AI Office and prepare for additional obligations
- Monitor codes of practice: Track the AI Office's code-of-practice development process. Early engagement helps shape these codes and gives you a head start on compliance
Phase 3: Before August 2026
- Risk classification: Finalize the risk classification for every AI system. Engage legal counsel for borderline cases — the difference between high-risk and limited-risk has massive compliance implications
- Implement high-risk requirements: For each high-risk AI system, implement the full set of provider or deployer obligations. This is a 12-18 month effort for most organizations
- Conformity assessment preparation: Identify which conformity-assessment procedure applies (self-assessment vs. notified-body assessment). Engage notified bodies early — capacity constraints are expected
- Documentation build: Create technical documentation, instructions for use, and conformity declarations. Use harmonized standards where available as evidence of compliance
- Consider ISO 42001: Organizations that implement an AI Management System aligned with ISO 42001 will find that many EU AI Act requirements are already addressed by the standard's framework
Phase 4: Before August 2027
- Sectoral integration: For AI in regulated products, coordinate AI Act conformity assessment with existing product-safety requirements. This may require engaging with both AI-specific and sector-specific notified bodies
- Supply-chain alignment: Ensure that all component suppliers, including AI model providers, provide the documentation and information needed for integrated conformity assessment
- Continuous monitoring: By this date, post-market monitoring and vigilance systems should be mature, with established processes for incident detection, reporting, and corrective action
The organizations that will navigate the EU AI Act most effectively are those that start preparing now — not when deadlines are imminent. Every month of early preparation reduces the cost and disruption of compliance later.
Frequently Asked Questions
When did the EU AI Act enter into force?
The EU AI Act entered into force on August 1, 2024. However, most substantive obligations do not apply immediately — the regulation uses a phased timeline with different requirements activating over a three-year period from 2024 to 2027.
What obligations are already in effect?
As of February 2, 2025, prohibited AI practices are enforceable and AI literacy requirements apply. Organizations must have stopped using banned AI systems such as social scoring, subliminal manipulation, and workplace emotion inference.
When do high-risk AI obligations apply?
Most high-risk AI obligations apply from August 2, 2026 for stand-alone systems in Annex III areas (employment, credit, etc.). High-risk AI embedded in regulated products (medical devices, machinery) has until August 2, 2027.
What happens on August 2, 2025?
GPAI model obligations take effect, including documentation, copyright compliance, and transparency measures. GPAI models with systemic risk face additional requirements. The AI governance structure (AI Office, AI Board, codes of practice) also becomes operational.
Is there a grace period for existing AI systems?
For high-risk AI systems already on the market before August 2, 2026, there is a transitional provision — they are not required to comply immediately unless significantly modified. However, prohibited AI practices have no grace period and must be discontinued immediately.