Overview
The EU AI Act follows a risk-based approach: every AI system must be assigned to one of four risk categories:
- Prohibited AI practices
- High-risk systems
- Transparency-obligated systems (limited risk)
- Minimal risk
This template serves as a structured working aid for classifying a specific AI system.
Important Note
Risk classification is not a one-off formality. It must be documented, reviewed regularly and updated when systems change.
Risk Classification Template
1. System Description
1.1 Name of the AI System
- System designation:
- Version:
- Provider:
- Deployer:
1.2 Intended Purpose
- What specific purpose does the system serve?
- Is it deployed in a sensitive area (HR, finance, healthcare, education)?
- Does it influence decisions about individuals?
1.3 Functional Description
- Type of AI (e.g. classification, scoring, generative AI)
- Degree of automation (supporting / decision-making)
- Integrated into a product or standalone?
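The descriptive fields in Section 1 can be captured in a structured record for internal documentation. A minimal sketch; all field and class names are illustrative, not prescribed by the AI Act:

```python
from dataclasses import dataclass

@dataclass
class AISystemDescription:
    """Structured record for Section 1 of this template (illustrative field names)."""
    name: str                   # system designation
    version: str
    provider: str               # entity placing the system on the market
    deployer: str               # entity using the system under its authority
    intended_purpose: str
    sensitive_area: bool        # HR, finance, healthcare, education, ...
    affects_individuals: bool   # influences decisions about persons
    ai_type: str                # e.g. "classification", "scoring", "generative"
    automation_level: str       # "supporting" or "decision-making"
    standalone: bool            # standalone vs. integrated into a product
```

Keeping one such record per system makes the later classification steps and their justification easier to audit.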
2. Step 1: Prohibited Practices (Art. 5)
Check whether the system involves any of the following practices:
| Check Question | Yes / No |
|---|---|
| Manipulative AI using subliminal techniques to materially distort behaviour? | |
| Exploitation of vulnerabilities due to age, disability or social/economic situation (e.g. children)? | |
| Social scoring (evaluation of persons based on social behaviour or personal characteristics)? | |
| Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes? | |
| Emotion recognition in the workplace or educational institutions? | |
| Biometric categorisation to infer sensitive characteristics? | |
If YES
If at least one question is answered with "Yes", the practice is prohibited under Art. 5 and the system may not be placed on the market or put into use. Only real-time remote biometric identification carries narrow, strictly regulated exceptions for law enforcement.
3. Step 2: High-Risk Systems (Annex III)
Check whether the system falls within one of the following areas:
| Area | Relevant (Yes/No) |
|---|---|
| Biometric identification | |
| Critical infrastructure | |
| Education and vocational training | |
| Employment and HR | |
| Access to essential services (e.g. credit) | |
| Law enforcement | |
| Migration and border control | |
| Justice and democratic processes | |
If "Yes", review the exception test under Art. 6(3):
- Does the system only perform a narrow procedural task?
- Does it merely improve the result of a previously completed human activity?
- Does it only detect decision-making patterns or deviations, without replacing or influencing human assessment?
- Does it only perform a preparatory task for an assessment?
Note: the exception never applies if the system performs profiling of natural persons. If no exception applies, the system is high-risk.
Documentation Obligation
The justification for a high-risk or non-high-risk classification must be documented.
4. Step 3: Safety Component
Is the AI system:
- A safety component of a product covered by EU harmonisation legislation (Annex I)?
- Itself such a product requiring third-party conformity assessment?
If yes, a high-risk classification applies under Art. 6(1), independently of Annex III.
5. Step 4: GPAI (General Purpose AI)
Is it:
- A general-purpose AI model (foundation model)?
- A GPAI model with systemic risk (e.g. training compute above 10^25 FLOPs, Art. 51(2))?
If yes, additional obligations under Art. 51-56 apply.
6. Step 5: Transparency Obligations (Limited Risk)
Check:
| Check Question | Yes/No |
|---|---|
| Does the system interact directly with humans? | |
| Does it generate synthetic content (text, image, audio)? | |
| Is deepfake technology used? | |
If "Yes", transparency and labelling obligations are relevant.
7. Result of Risk Classification
| Category | Result |
|---|---|
| Prohibited | |
| High-risk | |
| Limited risk | |
| Minimal risk | |
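The precedence of Steps 1-5 (a prohibition overrides everything, high-risk overrides limited risk, and minimal risk is the default) can be sketched as a simple decision function. This is an illustrative aid for internal documentation, not a legal determination; the boolean inputs correspond to the check questions in the steps above:

```python
from enum import Enum

class RiskLevel(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited risk"
    MINIMAL_RISK = "minimal risk"

def classify(prohibited_practice: bool,
             annex_iii_area: bool,
             art_6_3_exception: bool,
             safety_component: bool,
             transparency_trigger: bool) -> RiskLevel:
    """Apply the step order of this template: Art. 5 first, then high-risk, then transparency."""
    if prohibited_practice:                        # Step 1: Art. 5 practices
        return RiskLevel.PROHIBITED
    if safety_component:                           # Step 3: high-risk independent of Annex III
        return RiskLevel.HIGH_RISK
    if annex_iii_area and not art_6_3_exception:   # Step 2: Annex III minus Art. 6(3) exception
        return RiskLevel.HIGH_RISK
    if transparency_trigger:                       # Step 5: transparency obligations
        return RiskLevel.LIMITED_RISK
    return RiskLevel.MINIMAL_RISK                  # default
```

For example, an Annex III system without an Art. 6(3) exception yields `classify(False, True, False, False, True)` → `RiskLevel.HIGH_RISK`.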
Justification:
8. Follow-Up Measures by Risk Level
Prohibited Systems
- Cease development or deployment
- Obtain legal advice
High-Risk Systems
- Implement risk management system
- Document data governance
- Create technical documentation
- Carry out conformity assessment
- Review CE marking
- Establish post-market monitoring
Limited Risk
- Fulfil transparency obligations
- Label AI interactions
- Integrate deepfake notices
Minimal Risk
- General governance review
- Documentation for accountability purposes
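The follow-up measures above can be kept alongside the classification result as a simple lookup table, so each assessed system carries its action list. The measure wording is taken from this template; the data structure itself is an illustrative sketch:

```python
FOLLOW_UP_MEASURES = {
    "prohibited": [
        "Cease development or deployment",
        "Obtain legal advice",
    ],
    "high-risk": [
        "Implement risk management system",
        "Document data governance",
        "Create technical documentation",
        "Carry out conformity assessment",
        "Review CE marking",
        "Establish post-market monitoring",
    ],
    "limited risk": [
        "Fulfil transparency obligations",
        "Label AI interactions",
        "Integrate deepfake notices",
    ],
    "minimal risk": [
        "General governance review",
        "Documentation for accountability purposes",
    ],
}

def measures_for(risk_level: str) -> list[str]:
    """Return this template's follow-up measures for a given risk level."""
    return FOLLOW_UP_MEASURES[risk_level]
```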
9. Documentation Section
- Date of classification:
- Responsible person:
- Internal review by:
- Last updated:
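Since the classification must be reviewed regularly (see the note in the Overview), a lightweight reminder can be derived from the "Last updated" field. The 12-month default interval is an assumption for illustration, not a statutory deadline:

```python
from datetime import date, timedelta

def review_due(last_updated: date, interval_days: int = 365) -> bool:
    """Flag a classification for re-review after a chosen interval (assumed, not statutory)."""
    return date.today() - last_updated > timedelta(days=interval_days)
```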
Connection to the GDPR
Regardless of the AI Act risk level, the GDPR applies in parallel whenever the system processes personal data.
Need help implementing?
Work with Creativate AI Studio to design, validate and implement AI systems — technically sound, compliant and production-ready.
Need legal clarity?
For specific legal questions on the AI Act and GDPR, specialized legal advice focusing on AI regulation, data protection and compliance structures is available.
Independent legal advice. No automated legal information. The platform ai-playbook.eu does not provide legal advice.
Not sure where you stand?
If your AI use case does not clearly fit into a category, send us a brief description — we will point you in the right direction.
Next Steps
- Complete this template for each AI system.
- Document the classification justification traceably.
- For high-risk systems, review conformity requirements early.
- Integrate risk classification into your AI governance.
- Validate your classification with qualified experts if necessary.