EU AI Act Risk Classification – Template & Decision Tree

Step-by-step template for classifying an AI system under the EU AI Act – including prohibited practices, high-risk assessment (Annex III), safety component and documentation requirements.

11 February 2026 · 4 min read

EU AI Act · Risk Classification · High-Risk · Prohibited Practices · Template · Compliance

Overview

The EU AI Act follows a risk-based approach. Every AI system must be assigned to one of four risk categories:

  1. Prohibited AI practices
  2. High-risk systems
  3. Transparency-obligated systems (limited risk)
  4. Minimal risk

This template serves as a structured working aid for classifying a specific AI system.
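
If you track classifications in an inventory or internal tooling rather than only in documents, the four categories map naturally onto a small enumeration. A minimal sketch in Python (the identifiers are our own shorthand, not terms defined by the Act):

```python
from enum import Enum

class RiskCategory(Enum):
    """The four risk tiers of the EU AI Act (identifiers are our own)."""
    PROHIBITED = "prohibited"       # Art. 5 practices
    HIGH_RISK = "high-risk"         # Art. 6 / Annex III / safety components
    LIMITED_RISK = "limited risk"   # transparency obligations
    MINIMAL_RISK = "minimal risk"   # no specific AI Act obligations
```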

Important Note

Risk classification is not a one-off formality. It must be documented, reviewed regularly and updated when systems change.

Risk Classification Template

1. System Description

1.1 Name of the AI System

  • System designation:
  • Version:
  • Provider:
  • Deployer:

1.2 Intended Purpose

  • What specific purpose does the system serve?
  • Is it deployed in a sensitive area (HR, finance, healthcare, education)?
  • Does it influence decisions about individuals?

1.3 Functional Description

  • Type of AI (e.g. classification, scoring, generative AI)
  • Degree of automation (supporting / decision-making)
  • Integrated into a product or standalone?
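
Teams that maintain an AI system inventory can capture the fields from sections 1.1 to 1.3 in a small record type. A minimal sketch, with field names of our own choosing:

```python
from dataclasses import dataclass

@dataclass
class AISystemDescription:
    """Section 1 of the template as a record (illustrative; field names are our own)."""
    name: str                   # 1.1 system designation
    version: str
    provider: str
    deployer: str
    intended_purpose: str       # 1.2 specific purpose of the system
    sensitive_area: bool        # deployed in HR, finance, healthcare, education?
    affects_individuals: bool   # influences decisions about individuals?
    ai_type: str                # 1.3 e.g. "classification", "scoring", "generative AI"
    automation_level: str       # "supporting" or "decision-making"
    embedded_in_product: bool   # integrated into a product vs. standalone
```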

2. Step 1: Prohibited Practices (Art. 5)

Check whether the system involves any of the following practices:

| Check Question | Yes / No |
| --- | --- |
| Manipulative AI that subconsciously influences behaviour? | |
| Exploitation of vulnerable persons (e.g. children)? | |
| Social scoring of natural persons (by public or private actors)? | |
| Real-time remote biometric identification in publicly accessible spaces (for law enforcement purposes)? | |
| Emotion recognition in the workplace or educational institutions? | |
| Biometric categorisation of sensitive characteristics? | |

If YES

If at least one question is answered with "Yes", the practice falls under Art. 5 and is prohibited.
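
In code, this step reduces to a simple "any" check over the questions in the table above. An illustrative sketch (the keys paraphrase the check questions and are not official terminology):

```python
# Step 1 (Art. 5): if any prohibited-practice question is answered "yes",
# the practice is prohibited. Keys mirror the check questions above.
ART_5_CHECKS = {
    "manipulative_techniques": False,         # subconscious behavioural manipulation
    "exploits_vulnerable_persons": False,     # e.g. children
    "social_scoring": False,
    "realtime_remote_biometric_id": False,    # in publicly accessible spaces
    "emotion_recognition_work_education": False,
    "biometric_categorisation_sensitive": False,
}

def is_prohibited(checks: dict[str, bool]) -> bool:
    """True if at least one Art. 5 check question is answered 'yes'."""
    return any(checks.values())
```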

3. Step 2: High-Risk Systems (Annex III)

Check whether the system falls within one of the following areas:

| Area | Relevant (Yes/No) |
| --- | --- |
| Biometric identification | |
| Critical infrastructure | |
| Education and vocational training | |
| Employment and HR | |
| Access to essential services (e.g. credit) | |
| Law enforcement | |
| Migration and border control | |
| Justice and democratic processes | |

If "Yes", review Art. 6(3) (exception test):

  • Does the system have only minor impacts?
  • Does it not influence any significant decisions?

If no exception applies, the system is high-risk.

Documentation Obligation

The justification for a high-risk or non-high-risk classification must be documented.
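
The combination of the Annex III check and the Art. 6(3) exception test can likewise be sketched as a small helper. This is a simplification for illustration; the substantive Art. 6(3) assessment itself must be carried out and documented separately:

```python
# Annex III areas from the table above (labels are our own shorthand).
ANNEX_III_AREAS = {
    "biometric_identification",
    "critical_infrastructure",
    "education_and_vocational_training",
    "employment_and_hr",
    "access_to_essential_services",
    "law_enforcement",
    "migration_and_border_control",
    "justice_and_democratic_processes",
}

def is_high_risk_annex_iii(area: str | None, art_6_3_exception_applies: bool) -> bool:
    """Step 2: high-risk if the system falls within an Annex III area
    and no Art. 6(3) exception applies. Whether the exception applies
    (no significant risk, no material influence on decisions) must be
    assessed and documented separately."""
    if area is None or area not in ANNEX_III_AREAS:
        return False
    return not art_6_3_exception_applies
```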

4. Step 3: Safety Component

Is the AI system:

  • Part of a product with CE marking?
  • Safety-relevant for a regulated product?

If yes, a high-risk classification can also follow from Art. 6(1) in conjunction with Annex I, independently of Annex III.

5. Step 4: GPAI (General Purpose AI)

Is it:

  • A general-purpose AI model (e.g. a foundation model)?
  • A model with systemic risk?

If yes, additional obligations under Art. 51-56 apply.

6. Step 5: Transparency Obligations (Limited Risk)

Check:

| Check Question | Yes/No |
| --- | --- |
| Does the system interact directly with humans? | |
| Does it generate synthetic content (text, image, audio)? | |
| Is deepfake technology used? | |

If "Yes", transparency and labelling obligations are relevant.

7. Result of Risk Classification

| Category | Result |
| --- | --- |
| Prohibited | |
| High-risk | |
| Limited risk | |
| Minimal risk | |

Justification:

8. Follow-Up Measures by Risk Level

Prohibited Systems

  • Cease development or deployment
  • Obtain legal advice

High-Risk Systems

  • Implement risk management system
  • Document data governance
  • Create technical documentation
  • Carry out conformity assessment
  • Review CE marking
  • Establish post-market monitoring

Limited Risk

  • Fulfil transparency obligations
  • Label AI interactions
  • Integrate deepfake notices

Minimal Risk

  • General governance review
  • Documentation for accountability purposes
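
Where follow-up tracking is automated, the measures above map onto the risk categories as a simple lookup. An illustrative sketch using the category labels from section 7:

```python
# Follow-up measures per risk category (labels match section 7 of the template).
FOLLOW_UP_MEASURES = {
    "prohibited": [
        "cease development or deployment",
        "obtain legal advice",
    ],
    "high-risk": [
        "implement risk management system",
        "document data governance",
        "create technical documentation",
        "carry out conformity assessment",
        "review CE marking",
        "establish post-market monitoring",
    ],
    "limited risk": [
        "fulfil transparency obligations",
        "label AI interactions",
        "integrate deepfake notices",
    ],
    "minimal risk": [
        "general governance review",
        "documentation for accountability purposes",
    ],
}
```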

9. Documentation Section

  • Date of classification:
  • Responsible person:
  • Internal review by:
  • Last updated:

Connection to the GDPR

Regardless of the AI Act risk level, the GDPR applies in parallel whenever personal data are processed.

Not sure where you stand?

If your AI use case does not clearly fit into a category, send us a brief description — we will point you in the right direction.

Next Steps

  1. Complete this template for each AI system.
  2. Document the classification justification traceably.
  3. For high-risk systems, review conformity requirements early.
  4. Integrate risk classification into your AI governance.
  5. Validate your classification with qualified experts if necessary.

Need help implementing?

Work with Creativate AI Studio to design, validate and implement AI systems — technically sound, compliant and production-ready.

Need legal clarity?

For specific legal questions on the AI Act and GDPR, specialized legal advice focusing on AI regulation, data protection and compliance structures is available.

Independent legal advice. No automated legal information. The platform ai-playbook.eu does not provide legal advice.
