Overview
The EU AI Act (Regulation (EU) 2024/1689) classifies AI systems into four risk classes. The higher the risk, the stricter the requirements. This risk-based system is the centrepiece of the regulation.
The Four Risk Classes
1. Unacceptable Risk (Prohibited)
AI practices that pose a clear threat to people's safety or fundamental rights are prohibited:
- Social scoring by public authorities
- Real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions)
- Manipulation through subliminal techniques
- Exploitation of vulnerabilities of specific groups of persons
- Emotion recognition in the workplace and educational institutions
Prohibited Practices
Prohibited AI systems may not be placed on the market, put into service or used in the EU. Violations can be penalised with fines of up to EUR 35 million or 7% of worldwide annual turnover, whichever is higher.
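The "whichever is higher" rule means the cap scales with company size. A minimal sketch (function name and inputs are our own illustration, not anything prescribed by the Act):

```python
def max_fine_prohibited_practice(annual_turnover_eur: int) -> float:
    """Upper bound of the fine for a prohibited-practice violation:
    EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000, annual_turnover_eur * 7 / 100)

# A company with EUR 1 billion turnover: 7% (EUR 70 million) exceeds the
# EUR 35 million floor, so the higher figure applies.
print(max_fine_prohibited_practice(1_000_000_000))  # → 70000000.0
```

For smaller companies the fixed EUR 35 million amount is the binding cap.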
2. High Risk
AI systems with significant impact on fundamental rights or safety:
- Biometric identification and categorisation
- Critical infrastructure (energy, transport, water)
- Education and vocational training (access, assessment)
- Employment (recruitment, evaluation, dismissal)
- Essential services (credit scoring, insurance)
- Law enforcement and border control
- Justice and democratic processes
Requirements for high-risk AI:
- Risk management system
- Data governance and data quality
- Technical documentation
- Logging
- Transparency and information obligations
- Human oversight
- Accuracy, robustness, cybersecurity
Deploying a high-risk AI system?
Specialised expertise is crucial for the development and operation of high-risk AI systems. Work with Creativate AI Studio to meet requirements and build robust architectures.
3. Limited Risk (Transparency Obligations)
AI systems that interact with people must meet transparency requirements:
- Chatbots: Users must know they are interacting with AI
- Deepfakes: Must be labelled as AI-generated
- Emotion recognition: Affected persons must be informed
- AI-generated content: Must be labelled as such
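Labelling obligations like these are often implemented as machine-readable metadata attached to the content. A minimal sketch of such a label — the field names and disclosure text are our own illustration, not a format prescribed by the Act:

```python
from dataclasses import dataclass, asdict

@dataclass
class ContentLabel:
    """Illustrative machine-readable disclosure for AI-generated content."""
    ai_generated: bool
    generator: str        # e.g. the model or system that produced the content
    disclosure_text: str  # user-facing notice shown alongside the content

def label_ai_content(generator: str) -> dict:
    """Attach an AI-disclosure label to a piece of generated content."""
    return asdict(ContentLabel(
        ai_generated=True,
        generator=generator,
        disclosure_text="This content was generated by an AI system.",
    ))

meta = label_ai_content("example-model")
print(meta["disclosure_text"])  # → This content was generated by an AI system.
```

In practice the label would travel with the content (e.g. in file metadata or an API response) so downstream consumers can surface the disclosure.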
4. Minimal Risk
Most AI systems fall into this category and are not subject to any specific obligations:
- Spam filters
- AI in video games
- Recommendation systems (with limitations)
- Industrial applications without safety relevance
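The four classes above can be sketched as a simple decision order: check for prohibited practices first, then high-risk areas, then transparency-relevant systems; everything else is minimal risk. The keyword sets below are illustrative shorthand, not the Act's legal definitions:

```python
# Illustrative use-case keywords per risk class (simplified, not exhaustive).
PROHIBITED = {"social scoring", "subliminal manipulation",
              "workplace emotion recognition"}
HIGH_RISK = {"credit scoring", "recruitment", "border control",
             "critical infrastructure"}
TRANSPARENCY = {"chatbot", "deepfake", "ai-generated content"}

def classify(use_case: str) -> str:
    """Assign a use case to the first matching risk class, strictest first."""
    if use_case in PROHIBITED:
        return "unacceptable (prohibited)"
    if use_case in HIGH_RISK:
        return "high risk"
    if use_case in TRANSPARENCY:
        return "limited risk (transparency)"
    return "minimal risk"

print(classify("recruitment"))   # → high risk
print(classify("spam filter"))   # → minimal risk
```

A real classification requires legal analysis of the Annexes, but the strictest-first ordering is the key idea.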
Timeline
| Milestone | Date |
|---|---|
| Entry into force | August 2024 |
| Prohibition of unacceptable AI | February 2025 |
| GPAI rules applicable | August 2025 |
| High-risk AI (Annex III) | August 2026 |
| Full application (incl. high-risk AI in regulated products, Annex I) | August 2027 |
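Because the obligations phase in gradually, it helps to check which milestones already apply on a given date. A small sketch based on the timeline above (dates simplified to the first of the month):

```python
from datetime import date

# Milestones from the timeline above, simplified to month precision.
MILESTONES = [
    (date(2024, 8, 1), "Entry into force"),
    (date(2025, 2, 1), "Prohibition of unacceptable AI"),
    (date(2025, 8, 1), "GPAI rules applicable"),
    (date(2026, 8, 1), "High-risk AI (Annex III)"),
    (date(2027, 8, 1), "Full application"),
]

def applicable(on: date) -> list[str]:
    """Return all milestones that have taken effect by the given date."""
    return [name for when, name in MILESTONES if when <= on]

print(applicable(date(2026, 1, 1)))
# → ['Entry into force', 'Prohibition of unacceptable AI', 'GPAI rules applicable']
```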
General Purpose AI (GPAI)
Separate rules apply to general-purpose AI models (such as GPT, Claude or Gemini). Providers must supply technical documentation, put a copyright-compliance policy in place and publish a summary of the training content. Models posing systemic risk are subject to additional obligations, such as model evaluation and incident reporting.
Next Steps
- Conduct an inventory of all deployed AI systems
- Carry out risk classification for each system
- Perform a gap analysis between current status and requirements
- Create and prioritise an action plan
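The inventory, classification and gap-analysis steps above can be sketched as a simple data pipeline. The requirement names follow the high-risk list earlier in this article; the data model itself is our own illustration:

```python
# Requirement names taken from the high-risk list in this article.
HIGH_RISK_REQUIREMENTS = {
    "risk management", "data governance", "technical documentation",
    "logging", "transparency", "human oversight", "robustness",
}

def gap_analysis(inventory: dict[str, dict]) -> dict[str, set[str]]:
    """For each high-risk system in the inventory, list the requirements
    that are not yet covered by an implemented control."""
    gaps = {}
    for name, info in inventory.items():
        if info["risk_class"] == "high":
            gaps[name] = HIGH_RISK_REQUIREMENTS - set(info["controls"])
    return gaps

# Illustrative inventory: one high-risk and one minimal-risk system.
inventory = {
    "cv-screening": {"risk_class": "high",
                     "controls": ["logging", "human oversight"]},
    "spam-filter": {"risk_class": "minimal", "controls": []},
}
print(gap_analysis(inventory))
```

The resulting gap list per system is a natural input for prioritising the action plan.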
Need help with implementation?
Work with Creativate AI Studio to classify your AI systems, conduct a gap analysis and create a concrete action plan — technically sound and regulation-compliant.
Need legal clarity?
For specific legal questions on the AI Act and GDPR, specialized legal advice focusing on AI regulation, data protection and compliance structures is available.
Independent legal advice only — the platform ai-playbook.eu does not itself provide legal advice or automated legal information.
Not sure where you stand?
If your AI use case does not clearly fit into a category, send us a brief description — we will point you in the right direction.