Key Takeaways
  • Chapter V of the EU AI Act creates a two-tier framework for general-purpose AI (GPAI) models: core obligations for all GPAI models and additional obligations for those with systemic risk
  • All GPAI model providers must comply with transparency requirements, provide technical documentation, and respect EU copyright law — obligations effective from August 2, 2025
  • GPAI models trained with more than 10^25 FLOPs are presumed to carry systemic risk and face additional obligations including adversarial testing, incident reporting, and cybersecurity measures
  • Open-source GPAI models enjoy a partial exemption from documentation and transparency obligations, but not from copyright compliance or systemic risk obligations
  • Codes of practice, facilitated by the AI Office, provide a practical pathway to demonstrate compliance until harmonised standards are published

What is a GPAI Model?

A general-purpose AI (GPAI) model, as defined in Article 3(63) of the EU AI Act, is an AI model — including where such a model is trained with a large amount of data using self-supervision at scale — that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market, and which can be integrated into a variety of downstream systems or applications.

In practical terms, GPAI models include:

  • Large language models (LLMs): GPT-4, Claude, Gemini, Llama, Mistral, and similar text-generation models
  • Multimodal foundation models: Models processing and generating across multiple modalities (text, image, audio, video)
  • Large-scale image generation models: DALL-E, Stable Diffusion, Midjourney, and similar systems
  • Code generation models: Models capable of writing, analysing, and debugging code across programming languages
  • Speech and audio models: Large-scale models for speech recognition, synthesis, and audio generation

The defining characteristic is generality — the model can be adapted or used for a wide range of downstream applications, rather than being designed for a single specific task. This is what distinguishes GPAI models from narrow AI systems trained for one purpose.

GPAI Model vs GPAI System

The EU AI Act distinguishes between GPAI models (the underlying model) and GPAI systems (the model integrated into a downstream application). Chapter V governs GPAI models. When a GPAI model is integrated into a system that qualifies as high-risk under Annex I or III, that downstream system must also comply with the high-risk requirements of Chapter III. The GPAI model provider's obligations and the downstream system provider's obligations are complementary, not substitutes.

GPAI Under the EU AI Act

Chapter V of the EU AI Act (Articles 51-56) establishes a dedicated regulatory framework for GPAI models. This was one of the most debated additions during the legislative process, prompted by the rapid rise of foundation models and large language models that did not fit neatly into the original risk-based classification framework.

Two-Tier Framework

The regulatory approach creates two tiers of obligations:

  1. Tier 1 — All GPAI models: Core transparency and documentation obligations that apply to every GPAI model placed on the EU market (Articles 53-54)
  2. Tier 2 — GPAI models with systemic risk: Additional obligations for GPAI models classified as presenting systemic risk, on top of the Tier 1 requirements (Article 55)

Timeline

GPAI obligations apply from August 2, 2025 — 12 months after the AI Act entered into force. This is an earlier deadline than the high-risk AI system requirements (August 2026), reflecting the urgency of addressing foundation model risks. Providers of GPAI models already on the market before August 2, 2025 have a transitional period until August 2, 2027 to comply.

Enforcement

The AI Office (established within the European Commission) has exclusive competence to supervise and enforce GPAI model obligations. This centralised enforcement model differs from the decentralised approach used for high-risk AI systems, where national market surveillance authorities play the primary role. Fines for GPAI-related infringements can reach up to 3% of global annual turnover or EUR 15 million, whichever is higher.

All GPAI Models: Core Obligations

Article 53 establishes the baseline obligations that apply to all GPAI model providers, regardless of whether the model presents systemic risk. These are the minimum requirements every GPAI model provider must meet.

Obligation 1: Technical Documentation

Providers must draw up and keep up to date technical documentation of the GPAI model, including its training and testing process and the results of its evaluation. The documentation must contain, at a minimum:

  • A general description of the GPAI model, including the tasks it is intended to perform and the type and nature of the AI systems into which it can be integrated
  • The name and contact details of the provider
  • A description of how the model interacts or can be used to interact with hardware or software that is not part of the model itself
  • The version and release date of the GPAI model
  • The modalities (text, image, etc.) of inputs and outputs
  • The licence under which the model is made available
  • A description of computational resources used during training (including type, amount, and compute measured in FLOPs)
  • Information about the data used for training, testing, and validation, including the type and provenance of data and curation methodologies
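
The minimum fields above lend themselves to a structured internal record. The sketch below is illustrative only; the field names are informal shorthand of our own, not official Annex XI or AI Office template keys:

```python
from dataclasses import dataclass, field, asdict
import json

# Illustrative record of the Annex XI minimum fields listed above.
# Field names are informal shorthand, not official template keys.
@dataclass
class GpaiModelDocumentation:
    model_description: str          # tasks the model is intended to perform
    provider_name: str
    provider_contact: str
    interaction_description: str    # how the model interacts with external hardware/software
    version: str
    release_date: str               # e.g. "2025-08-02"
    input_modalities: list[str] = field(default_factory=list)
    output_modalities: list[str] = field(default_factory=list)
    licence: str = ""
    training_compute_flops: float = 0.0
    training_data_summary: str = "" # type, provenance, and curation of data

    def to_json(self) -> str:
        """Serialise for internal record-keeping or regulator requests."""
        return json.dumps(asdict(self), indent=2)
```

A record like this can be kept current alongside each model release and exported on request from the AI Office or national authorities.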

Obligation 2: Information to Downstream Providers

GPAI model providers must make available to downstream providers (those integrating the GPAI model into their own AI systems) adequate information and documentation to enable them to understand the capabilities and limitations of the GPAI model, and to comply with their own obligations under the AI Act. This includes information necessary for downstream providers to comply with Chapter III if they build high-risk AI systems on top of the GPAI model.

Obligation 3: Copyright Compliance Policy

Providers must put in place a policy to comply with EU copyright law, in particular Directive (EU) 2019/790, including the text and data mining exception in Article 4. This requires a good-faith effort to identify and respect rights holders' opt-outs from text and data mining.

Obligation 4: Training Data Summary

Providers must draw up and make publicly available a sufficiently detailed summary about the content used for training the GPAI model. This summary is published according to a template provided by the AI Office and is intended to help copyright holders understand whether their content may have been used in training.

Transparency Requirements

Transparency is a foundational principle of GPAI regulation. The requirements serve multiple purposes: enabling downstream providers to build compliant systems, enabling authorities to assess risks, and enabling rights holders to enforce their copyright.

Technical Documentation Transparency

The technical documentation for GPAI models must be made available to the AI Office and national competent authorities upon request. While the full documentation is not published, the training data summary is publicly accessible. The documentation requirements are specified in Annex XI of the AI Act and refined through implementing acts.

Downstream Provider Transparency

GPAI model providers must be transparent with downstream providers about the model's capabilities, limitations, and appropriate use cases. This includes:

  • Clear documentation of what the model can and cannot do
  • Known limitations, biases, and failure modes
  • Recommended and prohibited use cases
  • Integration guidelines and safety considerations
  • Information needed for downstream providers to fulfil their own regulatory obligations (particularly if they build high-risk AI systems)

Content Labelling for Generative AI

For GPAI models that generate synthetic content (text, images, audio, video), providers must ensure the outputs are marked in a machine-readable format as AI-generated. This supports the broader transparency and deep-fake labelling requirements in Article 50 of the AI Act.
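
The Act does not prescribe a specific marking format; in practice providers use standardised provenance formats such as C2PA manifests or embedded watermarks. As a minimal illustration of the idea of a machine-readable marker, the hypothetical sketch below emits a JSON provenance record for a piece of generated content:

```python
import json
import hashlib
from datetime import datetime, timezone

def make_ai_content_manifest(content: bytes, model_name: str) -> str:
    """Produce a minimal machine-readable provenance record for synthetic
    content. Illustrative only: real deployments typically use standardised
    formats such as C2PA manifests or embedded watermarks, not this
    ad-hoc JSON shape."""
    return json.dumps({
        "ai_generated": True,  # the core machine-readable claim
        "generator": model_name,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "created": datetime.now(timezone.utc).isoformat(),
    })
```

The key property is that the "AI-generated" claim is readable by software, not just visible to humans, so downstream platforms can detect and surface it.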

Copyright Requirements

The intersection of GPAI training and copyright law is one of the most significant legal considerations. The EU AI Act imposes specific obligations beyond what existed under prior copyright legislation.

Copyright Compliance Policy

GPAI model providers must establish and implement a policy to respect EU copyright law. Practically, this means:

  • Identifying training data sources: Maintaining a record of the sources of training data and the legal basis for using them
  • Respecting opt-outs: Implementing technical measures to identify and honour rights holders' reservations of rights under Article 4(3) of Directive 2019/790 (the text and data mining opt-out). This includes crawling robots.txt, meta tags, and other machine-readable opt-out signals
  • State-of-the-art compliance: Using state-of-the-art technologies, including watermarking and fingerprinting, where appropriate to respect copyright
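
One widely used machine-readable opt-out signal is robots.txt, which Python's standard library can evaluate directly. This is a sketch of a single check, not a complete opt-out pipeline; the crawler user-agent name is a placeholder, and real systems must also honour meta tags and other TDM reservation protocols:

```python
from urllib.robotparser import RobotFileParser

def may_crawl_for_training(robots_txt: str, url: str,
                           user_agent: str = "ExampleTrainingBot") -> bool:
    """Check one machine-readable opt-out signal: robots.txt.
    A disallow rule matching the crawler's user agent is treated as a
    reservation of rights for that URL. (The user-agent name is a
    placeholder; real pipelines must also check meta tags and other
    TDM opt-out protocols.)"""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)
```

A training-data pipeline would run a check like this per URL at collection time and record the outcome as part of its copyright compliance evidence.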

Training Data Summary

The publicly available training data summary must provide a sufficiently detailed overview of the content used for training, without revealing trade secrets or confidential business information. The AI Office publishes a template for this summary. Key elements include:

  • Categories and types of data sources used
  • Aggregate data about the content (e.g., domains, languages, data types)
  • Methodology for data collection and curation
  • Description of how copyright-protected content was identified and handled

Copyright Applies to All GPAI Models

The copyright compliance obligation applies to all GPAI models, including open-source models. There is no exemption. Even if a provider benefits from the open-source exemption for technical documentation and transparency, the copyright policy and training data summary obligations remain fully applicable.

GPAI with Systemic Risk

Article 51 establishes the concept of GPAI models with systemic risk — a special category carrying additional obligations due to the scale and potential impact of these models.

What is Systemic Risk?

A GPAI model presents systemic risk when it has high-impact capabilities. Systemic risks are risks specific to the high-impact capabilities of GPAI models, having a significant effect on the Union market due to their reach, and with actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or society as a whole, that can be propagated at scale across the value chain.

Systemic risk reflects the concern that the most powerful AI models can generate risks that are not confined to a single use case or deployment but can propagate through the economy and society — affecting many downstream applications and potentially millions of users simultaneously.

Systemic Risk Thresholds

The EU AI Act establishes a quantitative presumption for systemic risk classification:

The 10^25 FLOPs Threshold

A GPAI model is presumed to present systemic risk if the cumulative amount of compute used for its training is greater than 10^25 floating point operations (FLOPs). This threshold was calibrated based on the training compute of models available at the time of legislation (GPT-4-class models were estimated near or above this threshold).

Understanding 10^25 FLOPs

10^25 FLOPs corresponds to roughly 116,000 petaFLOP-days of computation (one petaFLOP-day is about 8.64 x 10^19 FLOPs). For context, training GPT-3 (175B parameters) required approximately 3.1 x 10^23 FLOPs — roughly 30 times below the threshold. GPT-4 is estimated to have required compute at or above the threshold. As hardware efficiency improves and training methods evolve, more models will approach or exceed this level.
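
For rough self-assessment, a widely used rule of thumb estimates training compute for dense transformers as 6 x parameters x training tokens. The sketch below applies that approximation against the Act's threshold; the model figures are illustrative, not official measurements:

```python
# Rule-of-thumb training-compute estimate (FLOPs ~= 6 * N * D for dense
# transformers) compared against the Act's 10^25 systemic-risk threshold.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(flops: float) -> bool:
    """Presumption under Article 51: cumulative training compute > 10^25 FLOPs."""
    return flops > SYSTEMIC_RISK_THRESHOLD_FLOPS

# An illustrative 175B-parameter model trained on 300B tokens:
flops = estimate_training_flops(175e9, 300e9)   # ~= 3.15e23 FLOPs
print(presumed_systemic_risk(flops))            # → False: well below 1e25
```

Providers whose estimate approaches the threshold should treat the 6ND figure as a floor, since the Act counts cumulative compute, including later training runs on the same model.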

Commission Designation Power

Beyond the computational threshold, the European Commission can designate a GPAI model as having systemic risk based on additional criteria:

  • Number of registered end users: Models with very high user bases may present systemic risk through their reach
  • High-impact capabilities: Models demonstrating capabilities that could pose risks when misused (e.g., advanced code generation, manipulation potential, scientific capability)
  • Degree of autonomy: Models with higher degrees of autonomous operation
  • Output modalities: Models capable of generating multiple types of content (text, image, audio, video) may have higher systemic impact

The Commission can update the FLOPs threshold through delegated acts as technology evolves. A provider can also proactively notify the AI Office that their model should be classified as presenting systemic risk.

Additional Obligations for Systemic-Risk GPAI Models

GPAI models with systemic risk must comply with all Tier 1 obligations plus the following additional requirements under Article 55:

Model Evaluation and Adversarial Testing

  • Perform model evaluation using standardised protocols and tools, including adversarial (red-teaming) testing, to identify and mitigate systemic risks
  • Adversarial testing must be proportionate to the level of risk and the state of the art, and may include the involvement of independent external experts
  • Testing should cover potential misuse scenarios, capability evaluations (including dangerous capabilities), and vulnerability assessments

Risk Assessment and Mitigation

  • Assess and mitigate possible systemic risks, including their sources, at Union level
  • Risk mitigation measures may include model design changes, safety mechanisms, deployment restrictions, or governance measures
  • Document the risk assessment process, findings, and mitigation measures taken

Incident Tracking and Reporting

  • Track, document, and report serious incidents and possible corrective measures to the AI Office and relevant national competent authorities without undue delay
  • Serious incidents include events where the GPAI model contributes to risks to public health, safety, or fundamental rights at scale
  • Maintain records of all incidents and near-misses for regulatory review

Cybersecurity Protections

  • Ensure an adequate level of cybersecurity protection for the GPAI model and the physical infrastructure hosting it
  • Address AI-specific threats including model theft, model manipulation, prompt injection, data poisoning, and adversarial attacks
  • Implement access controls, monitoring, and incident response capabilities specific to the GPAI model infrastructure

Comparison: All GPAI vs Systemic Risk GPAI Obligations

  • Technical documentation (Annex XI): required for all GPAI models; more extensive for systemic-risk models
  • Downstream provider information (Annex XII): required for all GPAI models; more extensive for systemic-risk models
  • Copyright compliance policy: required for both tiers
  • Training data summary (public): required for both tiers
  • Content labelling for generative AI: required for both tiers
  • Model evaluation / red-teaming: systemic-risk models only — standardised adversarial testing
  • Systemic risk assessment: systemic-risk models only — assess and mitigate at Union level
  • Serious incident reporting: systemic-risk models only — report to the AI Office without undue delay
  • Cybersecurity measures: not specifically required for Tier 1; systemic-risk models must adequately protect the model and its infrastructure
  • Enforcement authority: the AI Office for both tiers

Codes of Practice

Codes of practice are a central mechanism in the EU AI Act's GPAI framework. Article 56 tasks the AI Office with facilitating the development of codes of practice that operationalise the GPAI obligations.

Purpose and Legal Effect

Codes of practice serve several functions:

  • Practical guidance: They translate high-level obligations into specific, actionable compliance measures
  • Presumption of conformity: Adherence to a code of practice creates a presumption that the provider complies with the corresponding obligations — similar to how harmonised standards work for high-risk AI systems
  • Interim compliance pathway: Codes of practice fill the gap until harmonised standards for GPAI models are published, which may take several years
  • Not mandatory: Providers may choose alternative means of demonstrating compliance, but must then prove compliance through other evidence

Development Process

The AI Office coordinates the development of codes of practice with broad stakeholder involvement:

  • GPAI model providers (both large and small)
  • Downstream providers and deployers
  • Civil society organisations and rights holders
  • Academic and research institutions
  • National competent authorities

The first codes of practice were expected by May 2025, covering transparency obligations, copyright compliance, and systemic risk assessment. The AI Office has published initial drafts and conducted public consultations.

Practical Implications

For GPAI model providers, adherence to the codes of practice is the most pragmatic path to compliance in the near term. The codes will specify documentation templates, testing methodologies, copyright compliance procedures, and incident reporting formats. Engaging with the code development process — or at minimum, monitoring its progress — is essential for compliance planning.

Downstream Provider Obligations

Downstream providers — organisations that integrate GPAI models into their own AI systems — face a complex regulatory position. They must comply with their own obligations under the AI Act (which may include high-risk AI system requirements if their downstream system falls within Annex I or III) while relying on the GPAI model provider for upstream compliance.

When Downstream Systems Become High-Risk

If a downstream provider integrates a GPAI model into an AI system that qualifies as high-risk (e.g., a GPAI model used in a credit scoring application, a recruitment tool, or a medical device), the downstream provider becomes subject to the full Chapter III requirements for high-risk AI systems. The downstream provider must:

  • Comply with all Chapter III requirements (risk management, data governance, transparency, human oversight, etc.)
  • Undergo the applicable conformity assessment procedure
  • Issue a declaration of conformity and affix CE marking
  • Rely on the information provided by the GPAI model provider (under Annex XII) to fulfil documentation and transparency obligations

Shared Responsibility Model

The AI Act creates a shared responsibility between GPAI model providers and downstream providers:

  • GPAI model provider: Responsible for model-level obligations (technical documentation, transparency, copyright, systemic risk measures) and for providing downstream providers with necessary information
  • Downstream provider: Responsible for system-level compliance, including integration, deployment, and use-case-specific requirements
  • No transfer of liability: Using a compliant GPAI model does not exempt the downstream provider from their own obligations. The downstream provider must independently verify and demonstrate compliance for their specific use case
Due Diligence on GPAI Model Selection

Downstream providers should conduct due diligence on GPAI models before integration. Request the Annex XII information package from the GPAI model provider. Verify that the provider is in compliance with Chapter V. Assess whether the model's capabilities, limitations, and known biases are compatible with your intended high-risk use case. Document your evaluation — it forms part of your own compliance evidence.

Open-Source GPAI Exceptions

The EU AI Act provides a partial exemption for open-source GPAI models, reflecting the policy choice to support open innovation while maintaining essential safeguards.

What Qualifies as Open-Source?

For the purposes of the AI Act, a GPAI model qualifies for the open-source exemption when it is released under a free and open-source licence that allows access, use, modification, and distribution, and when its parameters (weights), model architecture, and information on model usage are made publicly available. Simply publishing model weights without architecture information or usage documentation does not qualify.

Scope of the Exemption

Open-source GPAI models are exempt from:

  • Technical documentation requirements (Annex XI)
  • Downstream provider information requirements (Annex XII)

Open-source GPAI models are not exempt from:

  • Copyright compliance policy obligation
  • Training data summary publication
  • All systemic risk obligations (if the model crosses the systemic risk threshold)

Important Limitations

  • Systemic risk override: If an open-source GPAI model presents systemic risk (exceeds 10^25 FLOPs or is designated by the Commission), all obligations apply in full, including the additional Tier 2 requirements. The open-source exemption does not apply to systemic risk models
  • Downstream liability unchanged: The open-source exemption for the GPAI model provider does not affect the obligations of downstream providers who integrate the model into their own systems
  • Not a blanket exemption: Even exempt models must comply with copyright law and provide training data summaries
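
The exemption logic above reduces to a small decision table. The sketch below encodes it with informal obligation names of our own choosing, not statutory terms:

```python
def applicable_obligations(open_source: bool, systemic_risk: bool) -> set[str]:
    """Encode the Chapter V exemption logic described above.
    Obligation names are informal shorthand, not statutory terms."""
    obligations = {
        "copyright_policy",         # never exempt
        "training_data_summary",    # never exempt
        "technical_documentation",  # Annex XI
        "downstream_information",   # Annex XII
    }
    if open_source and not systemic_risk:
        # The open-source exemption covers only the two documentation
        # duties, and only while the model is below the systemic-risk tier.
        obligations -= {"technical_documentation", "downstream_information"}
    if systemic_risk:
        # Tier 2: Article 55 obligations apply on top of everything else.
        obligations |= {"model_evaluation", "systemic_risk_mitigation",
                        "incident_reporting", "cybersecurity"}
    return obligations
```

Note how the systemic-risk branch restores the documentation duties for open-source models: crossing the threshold removes the exemption entirely.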

Practical Compliance Steps

Whether you are a GPAI model provider, a downstream provider, or an organisation using GPAI models, here are concrete steps to prepare for compliance.

For GPAI Model Providers

  1. Assess whether you are a GPAI model provider: Determine whether your model meets the definition in Article 3(63). Many fine-tuned or adapted models may not qualify as GPAI if they have been specialised for a narrow task
  2. Calculate training compute: Determine the total FLOPs used during training to assess whether the 10^25 threshold applies. If you are close to the threshold, prepare for systemic risk obligations proactively
  3. Prepare Annex XI technical documentation: Begin compiling the required technical documentation. Use the AI Office templates where available
  4. Establish a copyright compliance policy: Implement processes to identify and respect rights holders' opt-outs, and prepare the training data summary for public publication
  5. Prepare the Annex XII information package: Develop documentation that downstream providers need to fulfil their own obligations
  6. Implement content labelling: For generative models, ensure outputs can be marked as AI-generated in a machine-readable format
  7. Engage with codes of practice: Monitor and ideally participate in the code of practice development process. Plan to adopt the codes as your primary compliance pathway
  8. For systemic risk models: Establish red-teaming capabilities, incident reporting processes, and cybersecurity measures proportionate to the model's risk profile

For Downstream Providers

  1. Inventory GPAI model dependencies: Identify all GPAI models integrated into your AI systems and their providers
  2. Request Annex XII documentation: Obtain the information package from each GPAI model provider. If a provider cannot supply adequate documentation, assess the compliance risk of continued use
  3. Classify your downstream systems: Determine whether your systems qualify as high-risk under Annex I or III. If so, the full Chapter III requirements apply
  4. Conduct due diligence: Evaluate each GPAI model's suitability for your specific use case, considering known limitations, biases, and risk profile
  5. Build compliance documentation: Incorporate GPAI model information into your own technical documentation, risk assessments, and conformity evidence
  6. Plan for model changes: GPAI models are frequently updated. Establish a process to evaluate whether upstream model changes affect your system's compliance and trigger reassessment

For Organisations Using GPAI Systems

  1. Understand deployer obligations: If you deploy AI systems built on GPAI models, you have obligations under the AI Act as a deployer, including transparency to affected persons and, for high-risk systems, human oversight and impact assessment requirements
  2. Verify provider compliance: Confirm that both the GPAI model provider and the system provider are fulfilling their respective obligations
  3. Implement AI literacy: Ensure your staff have sufficient understanding of the AI systems they use to fulfil Article 4 AI literacy requirements (applicable from February 2, 2025)

The GPAI framework represents a new frontier in AI regulation. Unlike high-risk AI system requirements, which build on decades of EU product-safety law, the GPAI provisions are being implemented for the first time globally. Early movers who invest in compliance infrastructure now will be best positioned as enforcement begins and as the regulatory framework matures through codes of practice and harmonised standards.

Frequently Asked Questions

What is a general-purpose AI (GPAI) model under the EU AI Act?

A GPAI model is an AI model — including where trained with a large amount of data using self-supervision at scale — that displays significant generality and is capable of competently performing a wide range of distinct tasks. It can be integrated into a variety of downstream systems or applications. Examples include large language models (LLMs), multimodal foundation models, and large-scale image generation models. The key characteristic is generality — the ability to be adapted for many different use cases rather than a single specific task.

When do GPAI obligations apply?

GPAI model obligations under Chapter V apply from August 2, 2025 — 12 months after the AI Act entered into force. Providers of GPAI models already on the EU market before this date have until August 2, 2027 to bring existing models into compliance. New models placed on the market after August 2, 2025 must comply from the outset.

What is the systemic risk threshold for GPAI models?

A GPAI model is presumed to have systemic risk if it was trained using total computing power exceeding 10^25 FLOPs (floating point operations). The European Commission can also designate additional models as having systemic risk based on criteria such as the number of end users, high-impact capabilities, degree of autonomy, or output modalities. The computational threshold may be updated through delegated acts as technology evolves.

Are open-source GPAI models exempt from the EU AI Act?

Partially. Open-source GPAI models (where parameters, architecture, and usage information are publicly available) are exempt from the technical documentation requirements (Annex XI) and downstream provider information requirements (Annex XII). However, they must still comply with copyright law, publish a training data summary, and — critically — if they exceed the systemic risk threshold, all obligations apply in full, including the additional Tier 2 requirements. The exemption does not extend to systemic-risk models.

What are codes of practice for GPAI models?

Codes of practice are collaborative, AI Office-facilitated documents that operationalise GPAI obligations into specific compliance measures. Developed with input from GPAI providers, downstream providers, civil society, and academia, they provide a practical compliance pathway. Adherence to a code of practice creates a legal presumption of conformity with the corresponding GPAI obligations — similar to how harmonised standards work in EU product-safety law. They serve as the primary compliance mechanism until formal harmonised standards are published.