Introduction: The EU AI Act Is Here
The European Union's Artificial Intelligence Act (EU AI Act) is the world's first comprehensive legal framework for artificial intelligence. In force since August 1, 2024, this landmark regulation establishes a risk-based approach to AI governance that will reshape how organizations develop, deploy, and use AI systems.
Whether you're a technology provider, enterprise deployer, or service company using AI tools, understanding the implementation timeline is critical for compliance planning. This article breaks down the key dates, obligations, and actions you need to take.
Understanding the Risk-Based Approach
The EU AI Act categorizes AI systems into four risk levels, each with different compliance requirements:
| Risk Level | Examples | Requirements |
|---|---|---|
| Unacceptable | Social scoring, manipulative AI, real-time biometric identification | Prohibited (with limited exceptions) |
| High-Risk | HR/recruitment AI, credit scoring, medical devices, critical infrastructure | Conformity assessment, documentation, human oversight, transparency |
| Limited Risk | Chatbots, emotion recognition (outside prohibited contexts), deepfake generators | Transparency obligations (disclosure) |
| Minimal/No Risk | Spam filters, AI in video games | No mandatory requirements (voluntary codes encouraged) |
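For organizations building internal compliance tooling, these tiers map naturally onto a small data model. The Python sketch below encodes the table above; the class name, enum values, and obligation labels are our own illustration, not an official taxonomy from the Act.

```python
from enum import Enum

class AIActRiskTier(Enum):
    """The four EU AI Act risk tiers (illustrative labels, not official terms)."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright (Article 5)
    HIGH = "high"                  # conformity assessment, documentation, oversight
    LIMITED = "limited"            # transparency/disclosure obligations
    MINIMAL = "minimal"            # no mandatory requirements

# Illustrative mapping from tier to the headline obligations in the table above.
OBLIGATIONS = {
    AIActRiskTier.UNACCEPTABLE: ["prohibited (limited exceptions)"],
    AIActRiskTier.HIGH: [
        "conformity assessment", "technical documentation",
        "human oversight", "transparency",
    ],
    AIActRiskTier.LIMITED: ["disclosure to affected persons"],
    AIActRiskTier.MINIMAL: ["voluntary codes of conduct"],
}

def headline_obligations(tier: AIActRiskTier) -> list[str]:
    """Return the headline obligations for a given risk tier."""
    return OBLIGATIONS[tier]

print(headline_obligations(AIActRiskTier.HIGH))
```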
Phased Implementation Timeline
The EU AI Act follows a staggered implementation schedule, giving organizations time to prepare for different requirements:
February 2, 2025 — Prohibited AI Practices
The first deadline brings the Act's AI literacy obligations (Article 4) and the prohibition of unacceptable-risk AI practices:
- Social scoring systems (by public or private actors)
- Exploitative AI targeting vulnerable groups
- Subliminal manipulation techniques that cause harm
- Biometric categorization based on sensitive attributes
- Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases
- Emotion inference in workplaces and educational institutions
- Real-time remote biometric identification in publicly accessible spaces by law enforcement (subject to narrowly defined exceptions)
August 2, 2025 — General-Purpose AI (GPAI) Rules
Obligations for providers of general-purpose AI models take effect:
- Technical documentation requirements
- Copyright compliance and training data transparency
- Additional requirements for GPAI with systemic risk (e.g., frontier models)
- Notification to the European Commission for GPAI models posing systemic risk
August 2, 2026 — High-Risk AI Systems (Annex III)
Full compliance is required for high-risk AI systems listed in Annex III:
- Risk management systems
- Data governance requirements
- Technical documentation
- Record-keeping and automatic event logging (see the sketch after this list)
- Transparency and human oversight
- Accuracy, robustness, and cybersecurity
- Conformity assessments and CE marking
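To make the record-keeping item concrete, here is one way to emit structured, timestamped audit events for an AI system. The schema, field names, and identifiers below are illustrative assumptions; the Act requires traceable logging over the system's lifetime, not this particular format.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative structured logger for Article 12-style event logging.
logger = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_ai_event(system_id: str, event: str, **details) -> None:
    """Emit one timestamped audit record as a JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "event": event,
        **details,
    }
    logger.info(json.dumps(record))

log_ai_event(
    "cv-screener-v2",              # hypothetical system identifier
    "prediction",
    input_ref="application-1842",  # reference to the input, not raw personal data
    output="shortlisted",
    model_version="2.3.1",
    human_reviewer="hr-ops-07",    # supports the human-oversight requirement
)
```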
August 2, 2027 — High-Risk AI in Regulated Products
AI systems used as safety components in products covered by EU harmonization legislation (medical devices, machinery, vehicles, etc.) must comply with the high-risk requirements.
What Actions Should Your Organization Take?
Immediate Actions (Now – Q1 2025)
- AI System Inventory: Catalog all AI systems in use or development (a minimal record sketch follows this list)
- Risk Classification: Determine which risk category each system falls into
- Prohibited Use Review: Identify and eliminate any prohibited AI practices
- Governance Framework: Establish AI governance structures and responsibilities
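An inventory entry can start as a simple structured record per system, combining the catalog and risk-classification steps. The sketch below is illustrative: the field names, the example system, and the vendor name are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in an internal AI system inventory (illustrative schema)."""
    name: str
    owner: str                 # accountable team or role
    purpose: str               # intended use, in plain language
    risk_tier: str             # "unacceptable" | "high" | "limited" | "minimal"
    vendor: str | None = None  # third-party provider, if any
    in_production: bool = False
    reviewed_on: date | None = None
    notes: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="CV screening assistant",
        owner="HR Operations",
        purpose="Rank incoming job applications",
        risk_tier="high",            # HR/recruitment AI is Annex III high-risk
        vendor="ExampleVendor Ltd",  # hypothetical vendor name
        in_production=True,
        reviewed_on=date(2024, 11, 15),
    ),
]

# Flag prohibited uses first, then queue high-risk systems for assessment.
prohibited = [r for r in inventory if r.risk_tier == "unacceptable"]
high_risk = [r for r in inventory if r.risk_tier == "high"]
```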
Medium-Term Actions (2025)
- GPAI Compliance: If providing foundation models, prepare documentation and transparency measures
- High-Risk Preparation: Begin conformity assessment preparation for high-risk systems
- Training Programs: Develop AI literacy training for relevant staff
- Vendor Assessment: Evaluate AI vendors for compliance capabilities
Long-Term Actions (2026+)
- Conformity Assessments: Complete assessments for high-risk systems
- Continuous Monitoring: Implement post-market surveillance
- Documentation Maintenance: Keep technical documentation up to date
- Incident Reporting: Establish procedures for serious incident reporting
Penalties for Non-Compliance
The EU AI Act establishes significant penalties for violations, set in each case as the higher of a fixed amount or a share of global annual turnover:
- Prohibited AI violations: Up to €35 million or 7% of global annual turnover
- High-risk compliance failures: Up to €15 million or 3% of global annual turnover
- Incorrect information to authorities: Up to €7.5 million or 1% of global annual turnover
SMEs and startups benefit from proportionality: their fines are capped at whichever of the fixed amount or the percentage is lower.
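Because each cap combines a fixed amount with a turnover percentage, a short worked example helps. The sketch below assumes the reading described above: the higher of the two for most undertakings, the lower for SMEs and startups.

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float,
             global_turnover_eur: float, is_sme: bool = False) -> float:
    """Upper bound of an EU AI Act fine for a given violation category.

    Standard undertakings: the HIGHER of the fixed cap and the turnover
    percentage. SMEs and startups: the LOWER of the two.
    """
    pct_amount = global_turnover_eur * turnover_pct
    return min(fixed_cap_eur, pct_amount) if is_sme else max(fixed_cap_eur, pct_amount)

# Prohibited-practice violation by a firm with EUR 2bn global turnover:
print(max_fine(35_000_000, 0.07, 2_000_000_000))            # 140,000,000 (7% > EUR 35m)

# Same violation by an SME with EUR 10m turnover:
print(max_fine(35_000_000, 0.07, 10_000_000, is_sme=True))  # 700,000 (0.7% cap < EUR 35m)
```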
How ISO 42001 Supports EU AI Act Compliance
Organizations already implementing ISO 42001 (AI Management System) have a significant advantage. The standard's framework aligns well with EU AI Act requirements:
- Risk management processes map to high-risk AI requirements
- Documentation controls support technical documentation obligations
- AI impact assessments align with the Act's fundamental rights impact assessment obligations
- Governance structures facilitate accountability requirements
Conclusion
The EU AI Act's implementation timeline gives organizations a structured path to compliance, but early action is essential. Starting with an AI inventory and risk classification now will position your organization to meet deadlines and avoid penalties.
As your ISO 42001 certification partner, Glocert can help you build the governance framework and documentation systems needed for both AI management excellence and EU AI Act compliance.