Transition Periods & Timeline of the EU AI Act

Detailed overview of entry into force, transition periods and application dates of the EU AI Act – including concrete action recommendations for businesses.

11 February 2026 · 5 min read
EU AI Act · Timeline · Transition Periods · High-Risk AI · GPAI · Compliance

Overview

The EU AI Act is the world's first comprehensive regulation for Artificial Intelligence. Unlike the GDPR, however, it does not enter full application all at once but follows a staggered timeline with clearly defined transition periods.

For companies, this means: The regulation is already taking effect -- even though individual obligations will not become fully applicable until 2025, 2026 or 2027. Anyone who develops, integrates or deploys AI systems should know the timeline precisely to minimise risks and build compliance in a structured manner.

This article provides a complete overview of:

  • Entry into force
  • Application dates for individual chapters
  • Deadlines for GPAI models
  • Deadlines for high-risk systems
  • Action steps for companies

Entry into Force vs. Applicability -- The Difference

First, it is important to clearly distinguish two terms:

| Term | Meaning |
| --- | --- |
| Entry into force | The legal act is formally valid |
| Applicability | The specific obligations must be fulfilled |

The EU AI Act entered into force in August 2024. Most substantive obligations, however, apply only progressively.

Important to Understand

The EU AI Act already applies as binding EU law. Companies cannot claim they "don't need to do anything yet" simply because individual obligations become applicable later.

Complete Timeline of the EU AI Act

Overview of Key Milestones

| Date | Milestone | Content |
| --- | --- | --- |
| August 2024 | Entry into force | Publication in the Official Journal + 20 days |
| February 2025 | Prohibited practices applicable | Art. 5 AI Act |
| August 2025 | GPAI obligations + governance structures | Art. 51--56 |
| August 2026 | High-risk systems (Annex III) | Core obligations for providers & deployers |
| August 2027 | Full application | Incl. high-risk as safety component |

Phase 1 -- February 2025: Prohibited AI Practices

From February 2025, the prohibitions under Art. 5 EU AI Act are binding.

These include, among others:

  • Social scoring by public authorities
  • Manipulative AI systems
  • Exploitation of vulnerable persons
  • Real-time remote biometric identification in public spaces (with narrow exceptions)
  • Emotion recognition in the workplace or educational institutions

Fine Risk

Violations of Art. 5 can be sanctioned with up to EUR 35 million or 7% of global annual turnover.

What Companies Should Do by Then:

  1. Take stock of all deployed AI systems
  2. Review for possible prohibited practices
  3. Document the intended purpose
  4. Adapt or shut down problematic functions
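The four steps above amount to building a structured inventory with a review flag per system. A minimal sketch of such a record follows; all field and class names are illustrative assumptions, not terms defined by the AI Act.

```python
from dataclasses import dataclass, field

# Illustrative inventory record for steps 1-4; field names are assumptions.
@dataclass
class AISystemRecord:
    name: str
    vendor: str                          # in-house or third-party provider
    intended_purpose: str                # documented per step 3
    business_processes: list = field(default_factory=list)
    suspected_prohibited: bool = False   # flag for review under Art. 5

inventory = [
    AISystemRecord("cv-screening", "third-party",
                   "pre-filter job applications", ["recruiting"]),
]

# Step 2/4: surface systems that need adaptation or shutdown.
flagged = [s.name for s in inventory if s.suspected_prohibited]
```

Keeping the intended purpose in the record itself makes the Art. 5 review repeatable whenever a system's function changes.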

Phase 2 -- August 2025: GPAI Models & Governance

From August 2025, special rules apply for:

  • Providers of General Purpose AI (GPAI) models
  • Providers of GPAI models with systemic risk

Two Tiers for GPAI:

| Category | Criteria | Obligations |
| --- | --- | --- |
| Standard GPAI | General AI models | Technical documentation, training data summary |
| GPAI with systemic risk | >10^25 FLOPs | Adversarial testing, incident reporting, cybersecurity measures |
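The tiering above hinges on a single numeric threshold: training compute above 10^25 FLOPs triggers the systemic-risk presumption. A minimal sketch, assuming a `gpai_tier` helper (the function name is ours, the threshold is from the table):

```python
# The 10^25 FLOP threshold is the systemic-risk presumption from the Act;
# the function name and return strings here are illustrative.
SYSTEMIC_RISK_FLOPS = 1e25

def gpai_tier(training_flops: float) -> str:
    """Classify a GPAI model into the two tiers from the table above."""
    if training_flops > SYSTEMIC_RISK_FLOPS:
        return "GPAI with systemic risk"
    return "Standard GPAI"

print(gpai_tier(3e25))  # a frontier-scale model
print(gpai_tier(1e23))  # a smaller general-purpose model
```

Note that the threshold is a presumption, not the only route into the systemic-risk tier; a real assessment cannot be reduced to one comparison.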

Important: Companies using an API-based LLM (e.g. via cloud providers) are generally not providers of a GPAI model -- but may become deployers of a high-risk system.

Avoid the Misconception

API usage does not automatically make you a provider -- but functional changes or re-branding can shift this role.

Phase 3 -- August 2026: High-Risk AI Systems (Annex III)

From August 2026, the comprehensive compliance requirements for high-risk systems under Annex III apply.

Affected areas include:

  • Biometric identification
  • Critical infrastructure
  • Education
  • Employment
  • Credit scoring
  • Law enforcement
  • Migration
  • Justice

Core Provider Obligations

  1. Risk management system
  2. Data governance
  3. Technical documentation
  4. Logging
  5. Transparency obligations
  6. Human oversight
  7. Accuracy & robustness
  8. Declaration of Conformity + CE marking

Use the Preparation Time

High-risk compliance often requires 12--18 months of lead time. Companies should begin gap analyses now at the latest.

Phase 4 -- August 2027: Full Application

From August 2027, the AI Act applies in full, including:

  • High-risk AI as a safety component of products
  • Full market surveillance
  • EU-wide database obligations
  • Post-market monitoring

Practical Implementation -- What Companies Should Do NOW

Step 1 -- AI Inventory

  • What systems do we use?
  • In-house development or third-party provider?
  • In which business processes?

Step 2 -- Initial Risk Classification

  • Prohibited practices?
  • High-risk under Annex III?
  • GPAI relevance?

Step 3 -- Gap Analysis

  • Is technical documentation available?
  • Is logging implemented?
  • Is human oversight defined?
  • Is training data documented?
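The four gap-analysis questions above can be tracked as a simple checklist that reports what is still missing. A minimal sketch; the checklist keys mirror the bullets, everything else is an assumption.

```python
# Hedged sketch of step 3: the checklist items come from the bullets above,
# the function shape is illustrative.
def gap_analysis(status: dict) -> list:
    """Return the checklist items not yet in place."""
    checklist = [
        "technical documentation",
        "logging",
        "human oversight",
        "training data documentation",
    ]
    return [item for item in checklist if not status.get(item, False)]

gaps = gap_analysis({"logging": True, "human oversight": True})
print(gaps)  # remaining items to close before August 2026
```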

Step 4 -- Governance Structure

  • Has a responsible person been appointed?
  • Is an AI compliance process defined?
  • Is the data protection / AI governance interface clarified?

Timeline as a Decision Basis

Many companies underestimate the organisational effort:

| Measure | Average Implementation Time |
| --- | --- |
| AI inventory | 1--3 months |
| Risk classification | 1--2 months |
| Technical documentation | 3--6 months |
| Quality management system | 6--12 months |
| Conformity assessment | 3--6 months |
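Summing the ranges in the table shows why the effort is easy to underestimate. A back-of-the-envelope sketch, assuming the worst case where the steps run strictly in sequence (in practice some can overlap):

```python
# Ranges taken from the table above; the sequential-sum assumption is ours.
durations = {
    "AI inventory": (1, 3),
    "Risk classification": (1, 2),
    "Technical documentation": (3, 6),
    "Quality management system": (6, 12),
    "Conformity assessment": (3, 6),
}

low = sum(lo for lo, _ in durations.values())
high = sum(hi for _, hi in durations.values())
print(f"{low}-{high} months if every step runs strictly in sequence")
```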

The regulatory clock is already ticking.

Connection to the GDPR

The AI Act does not replace the GDPR.

Both apply in parallel.

Typical interfaces:

  • Data Protection Impact Assessment (DPIA)
  • Transparency obligations
  • Purpose limitation
  • Data minimisation
  • Data subject rights

Companies therefore need integrated governance structures.

Need help implementing?

Work with Creativate AI Studio to design, validate and implement AI systems — technically sound, compliant and production-ready.

Something novel or high-risk?

Collaborate with Creativate AI Studio and a network of industry experts and deep-tech researchers to explore, prototype and validate advanced AI systems.

Next Steps

  1. Conduct a structured AI inventory.
  2. Classify your systems according to AI Act risk classes.
  3. Identify regulatory gaps.
  4. Develop an internal AI governance roadmap.
  5. Plan budget and resources for 2025--2027.

Need legal clarity?

For specific legal questions on the AI Act and GDPR, specialized legal advice focusing on AI regulation, data protection and compliance structures is available.

Independent legal advice. No automated legal information. The platform ai-playbook.eu does not provide legal advice.
