In This Article
- The EU AI Act imposes penalties up to EUR 35 million or 7% of global turnover for prohibited-practice violations — among the highest in any EU regulation
- Three penalty tiers map directly to violation severity: prohibited practices, high-risk non-compliance, and misleading information to authorities
- Enforcement combines GDPR-style data-protection authority models with product-safety market surveillance — a dual mechanism unfamiliar to many compliance teams
- Audit-ready evidence requires three layers: documentation (policies, assessments), implementation (working controls), and operating evidence (logs, records over time)
- Organizations with existing ISO 42001 or ISO 27001 frameworks have a significant head start on evidence readiness
The Three-Tier Penalty Structure
The EU AI Act establishes one of the most severe penalty regimes in EU regulatory history. The structure is intentionally designed to be proportionate: the more harmful the violation, the higher the potential fine. Understanding these tiers is essential because they define the financial risk your organization faces and — by extension — the business case for compliance investment.
| Tier | Violation Type | Fixed Maximum | Turnover % | Applied Amount |
|---|---|---|---|---|
| Tier 1 | Prohibited AI practices (Article 5) | EUR 35 million | 7% of global annual turnover | Whichever is higher |
| Tier 2 | Non-compliance with high-risk requirements and most other obligations | EUR 15 million | 3% of global annual turnover | Whichever is higher |
| Tier 3 | Supplying incorrect, incomplete, or misleading information to authorities or notified bodies | EUR 7.5 million | 1% of global annual turnover | Whichever is higher |
For context, a company with EUR 500 million in global annual turnover faces a maximum Tier 1 penalty of EUR 35 million (since 7% of EUR 500M equals EUR 35M). But for a company with EUR 1 billion in turnover, the Tier 1 maximum rises to EUR 70 million (7% of EUR 1B). For the largest global technology companies, Tier 1 penalties could theoretically reach billions of euros.
SME and Startup Proportionality
The AI Act provides an important safeguard for smaller organizations. For SMEs (including startups), the penalty is capped at the lower of the fixed amount or the turnover percentage — the inverse of the rule for large enterprises. Additionally, member states must take into account the organization's economic viability when determining the final penalty amount. This proportionality mechanism ensures that penalties are meaningful without being existentially threatening to smaller innovators.
Prohibited AI Violations (Tier 1)
Tier 1 penalties — the most severe — apply to violations of Article 5, which prohibits specific AI practices deemed to pose an unacceptable risk to fundamental rights, safety, and democratic values.
What Triggers Tier 1 Penalties
- Deploying a social-scoring system: Using AI to evaluate or classify people based on social behavior or personal characteristics for purposes that are detrimental and disproportionate
- Using subliminal or manipulative AI: Deploying AI systems that distort human behavior through techniques beyond conscious awareness, causing significant harm
- Exploiting vulnerabilities: Operating AI that targets people's age, disability, or socioeconomic vulnerability to manipulate their behavior harmfully
- Biometric categorization on sensitive attributes: Categorizing individuals by inferring sensitive characteristics (race, religion, sexual orientation, political views) from biometric data
- Untargeted facial-recognition scraping: Building or expanding facial-recognition databases through untargeted scraping of facial images from the internet or CCTV footage
- Workplace or educational emotion inference: Using AI to infer the emotions of workers or students, except where used for medical or safety purposes
- Unauthorized real-time biometric identification: Using real-time remote biometric identification in public spaces for law enforcement, outside of the narrowly defined exceptions
The Article 5 prohibitions have applied since February 2, 2025, with the corresponding penalty provisions following on August 2, 2025. Organizations that continue operating banned AI systems are exposed to maximum penalties. There is no grace period, no transition provision, and no "first-offense" reduction for prohibited AI practices.
High-Risk Non-Compliance (Tier 2)
Tier 2 penalties cover the broadest range of violations, including failure to meet the requirements applicable to high-risk AI systems and most other substantive obligations under the regulation.
Common Tier 2 Violations
- Risk management failures: Not implementing or maintaining a continuous risk management system for high-risk AI
- Data governance deficiencies: Failing to ensure training, validation, and testing datasets meet quality, relevance, and bias-examination requirements
- Documentation gaps: Missing or incomplete technical documentation that prevents authorities from assessing compliance
- Logging failures: AI systems that do not automatically record events for traceability
- Transparency violations: Not providing adequate instructions for use to deployers
- Human oversight deficiencies: Designing systems that do not allow for effective human oversight, intervention, or override
- Accuracy and robustness failures: Not meeting appropriate levels of accuracy, robustness, and cybersecurity
- Conformity assessment avoidance: Placing high-risk AI on the market without completing the required conformity assessment
- CE marking violations: Affixing CE marking without conformity, or failing to affix it when required
- Registration failures: Not registering high-risk AI systems in the EU database
- GPAI non-compliance: GPAI model providers failing to meet Chapter V requirements
- Deployer obligation failures: Deployers not using systems according to instructions, not assigning human overseers, or not conducting required impact assessments
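Several of the failures above, logging in particular, are cheap to avoid if addressed at design time. As a minimal sketch, high-risk systems need to record events automatically for traceability; the JSON structure and field names below are illustrative choices, not formats mandated by the Act.

```python
import datetime
import json


def log_ai_event(system_id: str, event_type: str, detail: dict) -> str:
    """Produce one append-only, timestamped event record for an AI system.

    A minimal sketch of the kind of automatic traceability logging the
    Act expects for high-risk AI; field names here are illustrative.
    """
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system_id": system_id,
        "event_type": event_type,  # e.g. "inference", "override", "input_anomaly"
        "detail": detail,
    }
    return json.dumps(record)


# Each inference or notable event emits one line, suitable for an append-only log.
print(log_ai_event("hr-screening-v2", "inference", {"score": 0.83}))
```

Emitting records like this from day one means the "operating evidence" layer discussed later accumulates as a by-product of normal operation.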
Aggravating and Mitigating Factors
When determining the actual penalty amount within the maximum, authorities consider several factors:
- Nature, gravity, and duration of the infringement
- Whether the organization acted intentionally or negligently
- Actions taken to mitigate harm to affected persons
- Degree of cooperation with authorities
- Previous infringements by the same operator
- The manner in which the infringement became known to the authority (self-reporting is viewed favorably)
- The size and market share of the organization
Misleading Information Penalties (Tier 3)
Tier 3 addresses a specific and often overlooked risk: providing incorrect, incomplete, or misleading information to national competent authorities or notified bodies during compliance processes.
What Constitutes Misleading Information
- Submitting false or inaccurate data in conformity-assessment documentation
- Providing incomplete responses to authority information requests
- Misleading notified bodies during third-party conformity assessments
- Failing to disclose known non-conformities during investigations
- Submitting inaccurate information to the EU high-risk AI database
While Tier 3 carries the lowest maximum penalty, organizations should not underestimate its significance. Misleading-information violations often compound Tier 2 violations — an organization that fails to comply with high-risk requirements and misrepresents its compliance status to authorities may face penalties under both tiers simultaneously.
Enforcement Mechanisms
The EU AI Act introduces a dual enforcement model that combines elements from data-protection enforcement (similar to GDPR) with product-safety market surveillance (similar to CE-marking regimes). This hybrid approach is more comprehensive than either model alone.
The EU AI Office
Established within the European Commission, the AI Office has direct enforcement authority over GPAI model providers. It can:
- Request information and documentation from GPAI providers
- Conduct evaluations of GPAI models
- Request that providers take corrective measures
- Restrict or withdraw GPAI models from the market
- Impose fines for GPAI-specific violations
National Market Surveillance Authorities
Each EU member state must designate at least one market surveillance authority responsible for enforcing the AI Act within its territory. These authorities have broad powers:
- Information requests: Require organizations to provide documentation, access to data, and access to AI systems (including source code in justified cases)
- Inspections: Conduct on-site and remote inspections, including unannounced inspections
- Testing: Perform tests on AI systems to verify compliance with requirements
- Corrective orders: Order organizations to bring AI systems into compliance, withdraw products from the market, or recall systems already in use
- Prohibition orders: Prohibit the making available or putting into service of non-compliant AI systems
- Penalties: Impose administrative fines in accordance with the penalty framework
Cross-Border Enforcement
The AI Board — composed of member-state representatives — coordinates cross-border enforcement. Where an AI system is placed on the market in multiple member states, authorities cooperate through mutual assistance and joint investigations. The Commission can intervene to ensure consistent enforcement across the single market.
Market Surveillance Authorities
Market surveillance is a concept familiar in product-safety regulation but new to many compliance professionals from the data-protection or cybersecurity domains. Understanding how it works is essential for preparing your organization.
How Market Surveillance Differs from Data-Protection Enforcement
| Aspect | GDPR-Style Enforcement | Market Surveillance (AI Act) |
|---|---|---|
| Focus | Data processing activities | Product characteristics and safety |
| Trigger | Complaints, investigations, data breaches | Market monitoring, complaints, incident reports, spot checks |
| Evidence | Processing records, consent records, DPIAs | Technical documentation, conformity evidence, test results, logs |
| Remedies | Processing bans, fines | Product withdrawal, recall, prohibition, CE marking removal, fines |
| Pre-market | No pre-market approval | Conformity assessment required before market placement |
The key difference is that market surveillance authorities can — and do — order products off the market. For an AI system that has been deployed across your business operations, a withdrawal order can be far more disruptive than a fine.
How Inspections Work
While detailed inspection procedures vary by member state, the AI Act establishes minimum standards for how inspections and investigations are conducted.
Inspection Triggers
- Proactive monitoring: Authorities conduct routine market surveillance, sampling AI systems for compliance checks
- Complaints: Any person can submit a complaint about an AI system to the relevant authority
- Incident reports: Serious incidents reported by providers or deployers trigger investigations
- Cross-border referrals: Authorities in one member state can refer concerns to authorities in another
- AI Office referrals: For GPAI concerns, the AI Office can direct national authorities to investigate
The Inspection Process
A typical inspection follows these stages:
- Initial information request: The authority requests documentation, technical files, and access to the AI system. Organizations typically have a defined response period (often 15-30 days).
- Document review: Authorities review technical documentation, conformity declarations, risk assessments, and other compliance evidence against AI Act requirements.
- Technical evaluation: Where necessary, authorities conduct or commission technical testing of the AI system, including accessing and testing the system in operation.
- On-site inspection: Authorities may visit premises to verify that documented practices match actual operations, interview staff, and inspect systems in their operational environment.
- Findings and remediation: The authority communicates findings and, where non-compliance is identified, requires corrective action within a specified timeframe.
- Enforcement action: If corrective action is insufficient, authorities escalate to formal enforcement actions — penalties, withdrawal orders, or market prohibitions.
The AI Act explicitly allows authorities to access source code in justified cases, where necessary to verify compliance. This is a significant power. Organizations should prepare for this possibility by ensuring source code is well-documented, version-controlled, and that access can be provided in a controlled manner when legally required.
What "Audit-Ready" Evidence Looks Like
From an auditor's perspective — and as a certification body, Glocert sees this from both sides — "audit-ready" means more than having documents in a folder. It means having a coherent, traceable, and maintained evidence base that demonstrates ongoing compliance across three dimensions.
The Three Evidence Layers
- Layer 1 — Documentary Evidence: Policies, procedures, risk assessments, impact assessments, technical documentation, conformity declarations, and instructions for use. This is what you have written down.
- Layer 2 — Implementation Evidence: Configured systems, deployed controls, trained personnel, established processes. This proves that what you documented is actually in place.
- Layer 3 — Operating Evidence: Logs, monitoring records, incident reports, management review minutes, audit findings, corrective-action records. This proves that what you implemented actually works over time.
An organization with strong Layer 1 but weak Layer 3 is a common audit finding — it means you wrote the policies but cannot prove they are being followed. Conversely, strong Layer 3 without Layer 1 (ad-hoc practices without documentation) makes it impossible for authorities to efficiently assess compliance.
The organizations that perform best under inspection are those that can produce evidence at all three layers within hours, not weeks. This requires an evidence management system, not a last-minute document collection exercise.
Evidence by Obligation Category
Here is what audit-ready evidence looks like for each major obligation area under the EU AI Act.
Risk Management (Article 9)
| Evidence Type | What Inspectors Expect |
|---|---|
| Risk management methodology | Documented, approved methodology covering identification, analysis, evaluation, and treatment of AI-specific risks |
| Risk register | Current risk register with AI risks including bias, fairness, safety, explainability, and adversarial resilience |
| Risk treatment plans | Documented treatment decisions (accept, mitigate, eliminate) with rationale and implementation evidence |
| Testing and validation records | Evidence that the AI system was tested against identified risks, with results documented and acted upon |
| Review records | Evidence of periodic risk reviews, especially after system changes or incidents |
Data Governance (Article 10)
- Data management policy: Documented approach to training, validation, and testing data, including quality criteria
- Data provenance records: Where data came from, how it was collected, and what transformations were applied
- Bias assessment: Analysis of training data for potential biases, with mitigation measures documented
- Data quality metrics: Ongoing measurement of data relevance, representativeness, completeness, and accuracy
- Dataset versioning: Evidence that datasets used for training and evaluation are version-controlled and reproducible
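One lightweight way to make dataset versioning reproducible is content addressing: hash a canonical serialization of the data so any change produces a new version identifier. This is an illustrative approach, not a mechanism prescribed by the Act.

```python
import hashlib
import json


def dataset_version_id(records: list[dict]) -> str:
    """Content-addressed version identifier for a training dataset.

    Hashing a canonical serialization means any change to the data yields
    a different ID, so a version recorded in technical documentation can
    later be verified against the dataset actually used for training.
    """
    canonical = json.dumps(records, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]
```

Recording this ID alongside each training run ties the model, its evaluation results, and the exact dataset together in the evidence trail.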
Technical Documentation (Article 11)
- System description: General description of the AI system including intended purpose, developer, version, and hardware/software requirements
- Design specification: How the system was designed, the logic and algorithms used, and key design choices
- Training methodology: Description of how the system was trained, including data used, optimization approaches, and hyperparameters
- Performance metrics: Validation and testing results documenting system accuracy, precision, recall, and other relevant metrics
- Known limitations: Documented foreseeable circumstances in which the system may not perform as intended
- Version history: Record of significant changes and their impact on system behavior
Human Oversight (Article 14)
- Oversight design documentation: How the AI system enables human oversight — what information it presents to human operators, what intervention capabilities exist
- Role assignments: Named individuals assigned to oversight roles with documented authority to intervene and override
- Training records: Evidence that oversight personnel understand the system's capabilities, limitations, and how to interpret outputs
- Override logs: Records of instances where human operators intervened or overrode AI outputs, demonstrating that oversight mechanisms are operational
- Escalation procedures: Documented procedures for when and how to escalate AI decisions to higher authority
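An override log is only useful evidence if each entry captures who intervened, what the system proposed, and why the human decided otherwise. A minimal sketch of such a record follows; the fields are illustrative, chosen to mirror the evidence items above.

```python
import datetime
from dataclasses import dataclass


@dataclass
class OverrideRecord:
    """One entry in a human-override log.

    Illustrative fields demonstrating the evidence an operational
    oversight mechanism can produce: a named operator, the AI output,
    the human decision, and a documented rationale.
    """
    system_id: str
    operator: str        # named individual with documented override authority
    ai_output: str
    human_decision: str
    rationale: str
    timestamp: str = ""

    def __post_init__(self) -> None:
        # Stamp the record at creation time if no timestamp was supplied.
        if not self.timestamp:
            self.timestamp = datetime.datetime.now(
                datetime.timezone.utc
            ).isoformat()
```

A populated log of such records demonstrates to an inspector that oversight is not just designed on paper but exercised in practice.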
Transparency and Instructions for Use (Article 13)
- Instructions for use: Clear, accessible documentation enabling deployers to understand and use the system correctly
- Capability disclosure: Accurate description of what the system can and cannot do, including performance boundaries
- Intended-purpose specification: Precise description of intended use cases and any restrictions on use
- User notification evidence: For deployers — evidence that affected individuals are informed they are subject to AI-assisted decisions
Post-Market Monitoring (Article 72)
- Monitoring plan: Documented plan for how the system will be monitored after deployment
- Performance dashboards: Evidence of ongoing monitoring showing accuracy, drift, and anomaly detection
- Incident records: Log of incidents, near-misses, and complaints related to the AI system
- Corrective actions: Records of corrective measures taken in response to monitoring findings or incidents
- Serious incident reports: Evidence that serious incidents were reported to authorities within the required timeframe
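Drift detection, mentioned above, can start very simply: compare a monitored metric's recent mean against its baseline and flag when the shift exceeds a tolerance. This is a deliberately naive sketch; production monitoring would typically use statistical tests such as PSI or Kolmogorov-Smirnov.

```python
def detect_drift(baseline: list[float], recent: list[float],
                 threshold: float = 0.1) -> bool:
    """Flag drift when the mean of a monitored metric (e.g. daily model
    accuracy) shifts from its baseline by more than `threshold`.

    Simplistic by design: a mean-shift check is easy to audit and explains
    the concept, but real monitoring should use proper statistical tests.
    """
    base_mean = sum(baseline) / len(baseline)
    recent_mean = sum(recent) / len(recent)
    return abs(recent_mean - base_mean) > threshold
```

Even a check this simple, run on a schedule with results logged, produces exactly the kind of operating evidence inspectors ask for.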
Building Your Evidence Pack
Building an audit-ready evidence pack is not a one-time project — it is an ongoing practice. Here is a practical approach for organizations starting from scratch or maturing their existing practices.
Step 1: Establish Your Evidence Architecture
Decide how evidence will be organized, stored, and maintained. Key decisions include:
- Central evidence repository: A single, accessible location for all compliance evidence — document management system, GRC platform, or structured file system
- Naming conventions: Consistent naming that maps to AI Act article numbers and requirements
- Version control: All documents version-controlled with change history
- Access controls: Appropriate access restrictions while ensuring designated staff can locate and retrieve evidence quickly
Step 2: Map Requirements to Evidence
Create a compliance matrix mapping each applicable AI Act requirement to:
- The specific evidence that demonstrates compliance
- The owner responsible for creating and maintaining that evidence
- The review frequency (how often evidence is updated or regenerated)
- The current status (available, in progress, gap)
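A compliance matrix does not need specialist tooling to be useful; even a simple table of rows can be queried for gaps. The rows and helper below are illustrative, showing the four attributes listed above per requirement.

```python
# Illustrative compliance matrix: (requirement, evidence, owner, cadence, status).
matrix = [
    ("Art. 9 risk management", "risk register + methodology", "Risk lead",
     "quarterly", "available"),
    ("Art. 10 data governance", "bias assessment report", "Data lead",
     "quarterly", "gap"),
    ("Art. 11 technical documentation", "technical file", "Eng lead",
     "event-driven", "in progress"),
]


def open_gaps(rows):
    """Return the requirements whose evidence is not yet available."""
    return [req for (req, _, _, _, status) in rows if status != "available"]


print(open_gaps(matrix))  # ['Art. 10 data governance', 'Art. 11 technical documentation']
```

Keeping the matrix machine-readable means the gap report in Step 4's evidence gap analysis is a one-line query rather than a manual review.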
Step 3: Generate Evidence Systematically
Avoid the temptation to create compliance evidence as a standalone documentation project. Instead, embed evidence generation into existing business processes:
- Risk assessments should produce documented outputs as a natural part of the process, not as an afterthought
- Model development pipelines should automatically generate technical documentation and test records
- Monitoring systems should produce dashboards and reports that serve as operating evidence
- Management reviews should produce minutes that document governance decisions
Step 4: Test Your Evidence Readiness
Before an inspection occurs, test your ability to produce evidence under time pressure:
- Mock inspections: Simulate an information request and measure how long it takes to assemble a complete response
- Evidence gap analysis: Review each requirement against your evidence matrix and identify gaps
- Cross-functional walkthrough: Verify that different teams (development, operations, legal, compliance) can each contribute their portion of the evidence pack coherently
Step 5: Maintain Continuously
Evidence degrades. Risk assessments become outdated when systems change. Training records expire. Monitoring data accumulates but is not reviewed. Establish maintenance rhythms:
- Monthly: Review monitoring evidence and incident logs
- Quarterly: Update risk assessments and review compliance matrix status
- Annually: Comprehensive review of all evidence, management review, and internal audit
- Event-driven: Update evidence whenever significant changes occur to AI systems, organizational structure, or regulatory requirements
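The scheduled rhythms above can be enforced mechanically: track when each evidence item was last reviewed and flag anything past its cadence. The tolerance values and function name are illustrative assumptions.

```python
import datetime

# Illustrative tolerances, in days, for each review cadence.
CADENCE_DAYS = {"monthly": 31, "quarterly": 92, "annually": 366}


def review_overdue(last_review: datetime.date, cadence: str,
                   today: datetime.date) -> bool:
    """True when an evidence item has gone longer than its cadence allows.

    Running this across the whole compliance matrix turns 'evidence
    degrades' from a slogan into a daily report of overdue reviews.
    """
    return (today - last_review).days > CADENCE_DAYS[cadence]
```

Event-driven updates still need human judgment, but the calendar-driven ones are exactly the kind of check worth automating.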
Organizations that implement ISO 42001 (AI Management System) build much of this evidence infrastructure naturally. The standard requires documented risk assessments, impact assessments, management reviews, internal audits, and corrective action processes — all of which map directly to EU AI Act evidence requirements. Certification provides an additional layer of independent assurance that governance is functioning as documented.
Frequently Asked Questions
What are the maximum penalties under the EU AI Act?
The EU AI Act has three penalty tiers: up to EUR 35 million or 7% of global annual turnover for prohibited AI violations, up to EUR 15 million or 3% for high-risk non-compliance, and up to EUR 7.5 million or 1% for providing misleading information to authorities. The applicable amount is the higher of the fixed sum or the percentage.
Who enforces the EU AI Act?
Enforcement is shared between national market surveillance authorities in each member state (for high-risk AI and general obligations) and the EU AI Office (for GPAI model obligations). Each member state must designate at least one market surveillance authority and one notifying authority.
What evidence do inspectors look for?
Inspectors look for three categories: documentary evidence (policies, risk assessments, technical documentation), implementation evidence (working controls, configured systems, trained personnel), and operating evidence (logs, monitoring records, incident reports, management reviews).
Are penalties reduced for SMEs?
Yes. For SMEs and startups, the penalty is capped at the lower of the fixed amount or the turnover percentage (reversed from the large-enterprise rule). Member states must also consider economic viability when determining fines.
How is EU AI Act enforcement different from GDPR?
While both use significant penalties and extraterritorial reach, the AI Act adds product-safety mechanisms: market surveillance, CE marking, conformity assessments, and the ability to order product withdrawal. GDPR focuses on data processing; the AI Act focuses on AI system safety and governance throughout the product lifecycle.