GPAI Models under Art. 51–56 EU AI Act

What are General Purpose AI (GPAI) models under the EU AI Act? Overview of definition, provider obligations, systemic risk, FLOPs threshold and practical implications for companies using LLMs.

11 February 2026 · 5 min read
EU AI Act · GPAI · General Purpose AI · LLM · Systemic Risk · Compliance

Overview

With its rules on General Purpose AI (GPAI), the EU AI Act responds to a new generation of powerful, broadly applicable AI models -- particularly large language models (LLMs), multimodal models and other foundation models.

These models are not developed for a single purpose but can be deployed for a wide range of applications. This is precisely where their regulatory risk lies: they form the basis for numerous downstream systems, including high-risk AI.

This article explains:

  • What a GPAI model is
  • When additional obligations apply
  • What "systemic risk" means
  • What requirements providers must fulfil
  • What implications arise for companies using LLMs via API

What Is a GPAI Model?

A GPAI model is an AI model that:

  • Was trained on a broad data basis
  • Is versatile in its applications
  • Is not limited to a single specific task
  • Can be integrated in different contexts

Typical examples:

  • Large language models (LLMs)
  • Multimodal foundation models
  • Image generation models
  • Code models

Distinction

Not every AI application is a GPAI model. A specialised credit scoring model is not GPAI -- a universally deployable language model, however, is.

Two Regulatory Tiers for GPAI

The AI Act distinguishes between:

| Category | Criteria | Obligations |
|---|---|---|
| Standard GPAI | Universally deployable | Documentation, transparency |
| GPAI with systemic risk | Training compute > 10^25 FLOPs, or equivalent capabilities | Extended safety & evaluation obligations |

GPAI with Systemic Risk

A model is considered systemically risky if it:

  • Is exceptionally capable
  • Can have broad societal impact
  • Carries significant security risks

The FLOPs threshold (>10^25) serves as a technical indicator of training complexity.

Technical Threshold

The FLOPs threshold is a presumption, not a hard cut-off. Models below it can also be classified as posing systemic risk if they possess comparable capabilities.
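As a rough illustration of how the threshold relates to model scale, training compute is often approximated with the heuristic FLOPs ≈ 6 × parameters × training tokens. The model sizes below are hypothetical examples, not figures for any real model:

```python
# Rough estimate of training compute using the common heuristic
# FLOPs ≈ 6 * N * D (N = parameter count, D = training tokens).
# All model figures below are hypothetical illustrations.

GPAI_SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs presumption under the AI Act

def estimated_training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

def presumed_systemic_risk(flops: float) -> bool:
    """True if estimated compute exceeds the FLOPs-based presumption."""
    return flops > GPAI_SYSTEMIC_RISK_THRESHOLD

# Hypothetical 70B-parameter model trained on 2 trillion tokens:
f1 = estimated_training_flops(70e9, 2e12)   # ≈ 8.4e23 -> below threshold
print(f"{f1:.1e}", presumed_systemic_risk(f1))

# Hypothetical 1T-parameter model trained on 10 trillion tokens:
f2 = estimated_training_flops(1e12, 10e12)  # = 6.0e25 -> above threshold
print(f"{f2:.1e}", presumed_systemic_risk(f2))
```

The heuristic ignores architecture details and repeated epochs; it only shows the order of magnitude at which the presumption becomes relevant.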

Obligations for Providers of Standard GPAI

Providers must:

  1. Create technical documentation
  2. Publish a summary of training data
  3. Implement copyright policies
  4. Support downstream providers

Technical documentation includes:

  • Model architecture
  • Training methodology
  • Evaluation procedures
  • Performance metrics
  • Known risks

Additional Obligations for Systemic Risk

Providers of systemic GPAI models must additionally:

  1. Conduct model evaluations
  2. Implement adversarial testing
  3. Strengthen cybersecurity measures
  4. Establish incident reporting
  5. Document risk analyses

The aim is to minimise misuse, disinformation or security risks.

Heightened Duty of Care

Systemic GPAI providers are subject to a particularly stringent regulatory oversight regime.

Relationship to High-Risk Systems

A GPAI model itself is not automatically a high-risk system.

However: If a GPAI model is integrated into a high-risk AI system, the high-risk obligations additionally apply to the overall system.

Example: A language model is integrated into a recruiting system -- the overall system can be high-risk, even if the base model is GPAI.

What Does This Mean for API Users?

Companies using an LLM via API:

  • Are generally not providers of the GPAI model
  • But can become providers or deployers of a high-risk system
  • Bear their own documentation and transparency obligations

Important for Companies

Using an external LLM does not exempt you from your own compliance obligations.

Common Misconceptions

| Assumption | Reality |
|---|---|
| "The LLM provider is responsible" | Only for the base model |
| "API usage = no obligations" | Deployer obligations remain |
| "Open source is unregulated" | Commercial provision can trigger provider obligations |

Codes of Practice

The AI Act envisages voluntary codes of practice for GPAI providers.

These can:

  • Create standardisation
  • Simplify compliance
  • Increase market confidence

They do not, however, replace legal obligations.

Practical Implementation for Companies

Step 1 -- Role Analysis

  • Are we only using an external model?
  • Are we modifying it?
  • Are we fine-tuning it further?

Step 2 -- Documentation Review

  • Have we documented the intended purpose?
  • Is it clear how the model is integrated?
  • Have risks been assessed?

Step 3 -- Downstream Risk Analysis

  • In what context is the model deployed?
  • Does the overall system fall under Annex III?
  • Are there GDPR interfaces?
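The three steps above can be sketched as a simple decision aid. The input flags and the mapping to obligations are simplified assumptions for illustration, not legal advice:

```python
from dataclasses import dataclass

@dataclass
class AIActRoleCheck:
    """Simplified decision aid for Steps 1-3; not a substitute for legal review."""
    develops_own_gpai: bool       # trains/provides a GPAI model itself
    substantially_modifies: bool  # e.g. significant fine-tuning of a third-party model
    annex_iii_use_case: bool      # overall system falls under Annex III (high-risk)
    processes_personal_data: bool # GDPR interfaces exist

    def findings(self) -> list[str]:
        out = []
        if self.develops_own_gpai or self.substantially_modifies:
            out.append("potential GPAI provider obligations")
        else:
            out.append("deployer of an external model")
        if self.annex_iii_use_case:
            out.append("high-risk obligations for the overall system")
        if self.processes_personal_data:
            out.append("GDPR interfaces to assess")
        return out

# Example: company consuming an LLM via API in a recruiting tool
check = AIActRoleCheck(
    develops_own_gpai=False,
    substantially_modifies=False,
    annex_iii_use_case=True,    # recruiting is an Annex III area
    processes_personal_data=True,
)
for finding in check.findings():
    print("-", finding)
```

A real role analysis depends on contractual arrangements and the degree of modification; the sketch only shows how the checklist questions combine.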

Connection to the GDPR

GPAI models can:

  • Contain personal data
  • Generate personal data
  • Use training data from web scraping

Relevant GDPR questions:

  • Legal basis for training
  • Data subject rights
  • Accuracy of outputs
  • Third-country transfers

Strategic Significance

GPAI regulation is politically highly sensitive.

Companies should:

  • Review technological dependencies
  • Implement documentation obligations early
  • Establish governance structures

Need help implementing?

Work with Creativate AI Studio to design, validate and implement AI systems — technically sound, compliant and production-ready.

Something novel or high-risk?

Collaborate with Creativate AI Studio and a network of industry experts and deep-tech researchers to explore, prototype and validate advanced AI systems.

Next Steps

  1. Check whether you are a provider or user of a GPAI model.
  2. Document the integration and intended purpose.
  3. Assess downstream risks.
  4. Review potential high-risk classifications.
  5. Integrate GPAI governance into your compliance system.

Need legal clarity?

For specific legal questions on the AI Act and GDPR, specialized legal advice focusing on AI regulation, data protection and compliance structures is available.

Independent legal advice. No automated legal information. The platform ai-playbook.eu does not provide legal advice.
