High-Risk AI Systems under Annex III EU AI Act

Which AI systems qualify as high-risk under the EU AI Act? Complete overview of Annex III areas, exceptions under Art. 6(3) and all compliance obligations for providers and deployers.

11 February 2026 · 5 min read
EU AI Act · High-Risk · Annex III · Risk Classification · Compliance · CE Marking

Overview

The risk-based approach is the centrepiece of the EU AI Act. While prohibited practices only cover a small, clearly defined area, the regulatory focus lies on so-called high-risk AI systems.

These systems are not prohibited -- but they are subject to extensive compliance requirements. For many companies this is where the greatest regulatory risk lies, because a high-risk classification often comes as a surprise.

This article explains:

  • What Annex III specifically covers
  • When a system is not high-risk despite falling within scope
  • Which exceptions under Art. 6(3) apply
  • Which obligations must be fulfilled

What Does "High-Risk" Mean?

An AI system qualifies as high-risk if:

  1. It is deployed in one of the areas listed in Annex III, or
  2. It is a safety component of a product covered by EU product safety legislation (e.g. a medical device)

Important

"High-risk" does not mean a system is dangerous. It means the legislator sees significant potential for adverse impacts on health, safety or fundamental rights.

The 8 Areas under Annex III

1. Biometric Identification

Examples:

  • Facial recognition
  • Voice recognition
  • Behavioural biometrics

Risk: Invasion of privacy, misuse for state surveillance.

2. Critical Infrastructure

Examples:

  • Power grid management
  • Traffic management systems
  • Water supply

Risk: System failures with significant consequences for society.

3. Education & Vocational Training

Examples:

  • AI-assisted exam grading
  • University admissions selection
  • Learning performance predictions

Risk: Unequal treatment, lasting disadvantages.

4. Employment & Human Resources

Examples:

  • CV screening
  • Video interview analysis
  • Performance evaluation
  • Promotion decisions

Risk: Discrimination, lack of transparency.

5. Access to Essential Services

Examples:

  • Credit scoring
  • Insurance risk assessment
  • Social benefits assessment

Risk: Financial exclusion, social disadvantage.

6. Law Enforcement

Examples:

  • Crime risk prediction
  • Evidence analysis
  • Suspect identification

Risk: Erroneous decisions with severe consequences.

7. Migration & Border Control

Examples:

  • Visa decisions
  • Asylum assessments
  • Entry risk analysis

8. Justice & Democratic Processes

Examples:

  • Decision support for judges
  • AI systems influencing elections

When Is a System NOT High-Risk Despite Falling Within Scope?

Art. 6(3) provides for exceptions.

An Annex III system is not high-risk if it does not pose a significant risk to health, safety or fundamental rights, in particular where it:

  • performs only a narrow procedural task
  • improves the result of a previously completed human activity
  • detects decision-making patterns or deviations without replacing or influencing the human assessment
  • merely performs a preparatory task for an assessment

Avoid Misconceptions

"Merely supportive" does not automatically qualify as an exception. The decisive factor is whether the system effectively exercises decision-making power. Note also that an Annex III system performing profiling of natural persons is always considered high-risk, regardless of these exceptions.

Typical Borderline Cases

Scenario                                               | High-Risk?   | Assessment
CV pre-screening with automatic rejection              | Yes          | Employment (Annex III)
CV analysis with score only for HR support             | Possibly     | Review under Art. 6(3)
Credit risk indicator without automatic rejection      | Likely       | Essential service
Internal productivity analysis without HR consequences | Probably not | Not an Annex III area
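The two-step logic -- Annex III scope first, then the Art. 6(3) exception and the profiling backstop -- can be sketched as a small decision helper. The area labels and criterion flags below are illustrative assumptions, not an official taxonomy, and the output is a classification hint, not legal advice.

```python
from dataclasses import dataclass

# Simplified labels for the 8 Annex III areas; the legal text is authoritative.
ANNEX_III_AREAS = {
    "biometric_identification", "critical_infrastructure", "education",
    "employment", "essential_services", "law_enforcement",
    "migration_border_control", "justice_democratic_processes",
}

@dataclass
class SystemProfile:
    """Illustrative description of an AI system for classification."""
    deployment_area: str          # e.g. "employment"
    narrow_procedural_task: bool  # Art. 6(3)-style criterion
    preparatory_only: bool        # Art. 6(3)-style criterion
    profiles_persons: bool        # profiling rules out the exception

def classify(system: SystemProfile) -> str:
    """Return a rough classification hint -- not legal advice."""
    if system.deployment_area not in ANNEX_III_AREAS:
        return "outside Annex III"
    if system.profiles_persons:
        return "high-risk (profiling, no exception possible)"
    if system.narrow_procedural_task or system.preparatory_only:
        return "possible Art. 6(3) exception -- document the assessment"
    return "high-risk"

# Example: CV screening tool with automatic rejection
cv_screener = SystemProfile("employment", False, False, False)
print(classify(cv_screener))  # -> high-risk
```

Note that the Art. 6(3) branch does not return "not high-risk": the exception must be assessed and documented case by case, which the string deliberately reflects.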

The 7 Core Compliance Requirements

Every high-risk system requires:

  1. Risk management system
  2. Data governance concept
  3. Technical documentation
  4. Automatic logging
  5. Transparency information
  6. Human oversight
  7. Accuracy, robustness, cybersecurity

Additionally:

  • Conformity assessment
  • CE marking
  • Registration in EU database
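Requirement 4, automatic logging, is one of the more concrete engineering tasks on the list. A minimal sketch, assuming a JSON-line audit log; the field names are illustrative, the legal requirement is only that events remain traceable over the system's lifetime:

```python
import json
import logging
import time

logger = logging.getLogger("ai_system.audit")
logging.basicConfig(level=logging.INFO)

def log_decision_event(input_ref, output, reviewer=None):
    """Record one decision event as a JSON line (illustrative schema)."""
    event = {
        "timestamp": time.time(),    # the period of use must be traceable
        "input_ref": input_ref,      # a reference, not raw personal data
        "output": output,
        "human_reviewer": reviewer,  # links the event to the oversight step
    }
    logger.info(json.dumps(event))
    return event

event = log_decision_event("application-4711", "score=0.82", "hr_reviewer_1")
```

Logging references rather than raw inputs also keeps the audit trail itself from becoming a GDPR liability.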

Risk Management in Detail

The risk management system must:

  • Identify risks
  • Assess risks
  • Mitigate risks
  • Continuously monitor

It is a living process, not a one-off document.
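That loop can be sketched in a few lines; the severity scale, the threshold and the halving mitigation are invented purely for illustration:

```python
def risk_management_cycle(risks, mitigate):
    """One iteration of an identify -> assess -> mitigate -> monitor loop.

    `risks` maps a risk name to a severity score (illustrative 0-1 scale);
    `mitigate` is a callable returning the reduced (residual) severity.
    """
    residual = {}
    for name, severity in risks.items():           # identify + assess
        residual[name] = mitigate(name, severity)  # mitigate
    # monitor: anything still above the threshold enters the next cycle
    return {n: s for n, s in residual.items() if s > 0.2}

open_risks = risk_management_cycle(
    {"bias_in_training_data": 0.8, "logging_gap": 0.3},
    mitigate=lambda name, severity: severity * 0.5,
)
```

The point of the sketch: the function returns the risks that are still open, so it is naturally called again -- matching "a living process, not a one-off document".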

Technical Documentation

This must include, among other things:

  • System description
  • Intended purpose
  • Training data description
  • Testing procedures
  • Performance metrics
  • Human oversight mechanisms
  • Safety measures

Market Access at Risk

Without complete technical documentation, a high-risk system may not be placed on the market.
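A simple completeness check over the items listed above can catch gaps before a market-placement review. The key names are an illustrative mapping of that list; the legally required content of the documentation is defined in Annex IV:

```python
# Required documentation items, mirroring the list above (illustrative keys).
REQUIRED_DOCS = [
    "system_description", "intended_purpose", "training_data_description",
    "testing_procedures", "performance_metrics",
    "human_oversight_mechanisms", "safety_measures",
]

def documentation_gaps(provided: dict) -> list:
    """Return the required items that are missing or empty."""
    return [key for key in REQUIRED_DOCS if not provided.get(key)]

docs = {"system_description": "Resume ranking service",
        "intended_purpose": "HR decision support"}
missing = documentation_gaps(docs)
```

An empty `missing` list is a precondition for the review, not a substitute for it -- the check tests presence, not quality.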

Human Oversight

Providers must ensure that:

  • Humans understand the system
  • Intervention is possible
  • Shutdown is possible
  • Erroneous decisions can be corrected
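These four points can be sketched as a gate in front of the automated output. This is a hypothetical wrapper, not a prescribed mechanism; the idea it shows is that the model output stays a proposal until a human confirms, overrides, or halts the system:

```python
class OversightGate:
    """Illustrative human-in-the-loop gate around an AI recommendation."""

    def __init__(self):
        self.halted = False  # shutdown must always be possible

    def decide(self, recommendation, human_decision=None):
        """The human decision always wins over the model's recommendation."""
        if self.halted:
            return "system halted"
        if human_decision is not None:
            return human_decision           # intervention / correction
        return "pending: " + recommendation  # nothing takes effect unreviewed

gate = OversightGate()
result = gate.decide("reject", human_decision="invite to interview")
```

Whether such a strict confirm-every-output gate is proportionate depends on the system; the obligation is that intervention, shutdown and correction are possible, not that every decision is manually confirmed.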

Deployer Obligations

Not only providers are affected.

Deployers must:

  • Use systems in accordance with their intended purpose
  • Ensure input data quality
  • Guarantee human oversight
  • Conduct a fundamental rights impact assessment for certain systems

Connection to the GDPR

Many high-risk systems involve personal data.

This means parallel application of:

  • GDPR (e.g. Art. 22 automated decisions)
  • Data Protection Impact Assessment
  • Transparency obligations

Practical Implementation

Step 1 -- System Delineation

  • What is the AI system?
  • What function does it perform?
  • Who is the provider, who is the deployer?

Step 2 -- Annex III Review

  • Does the area of deployment fall under one of the 8 areas?
  • Does an exception under Art. 6(3) apply?

Step 3 -- Gap Analysis

  • Is documentation available?
  • Is risk management established?
  • Is logging implemented?

Step 4 -- Compliance Roadmap

  • Budget planning
  • Responsibilities
  • Timeline
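The four steps above are strictly ordered -- a gap analysis only makes sense once scope is settled -- which can be tracked with something as simple as an ordered checklist (the structure is an illustrative sketch):

```python
# The four steps from the article, in their required order.
STEPS = [
    "system delineation",   # Step 1
    "annex iii review",     # Step 2
    "gap analysis",         # Step 3
    "compliance roadmap",   # Step 4
]

def next_step(completed):
    """Return the first step not yet completed, preserving the order."""
    for step in STEPS:
        if step not in completed:
            return step
    return None  # all steps done

todo = next_step({"system delineation"})
```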

Need help implementing?

Work with Creativate AI Studio to design, validate and implement AI systems — technically sound, compliant and production-ready.

Need legal clarity?

For specific legal questions on the AI Act and GDPR, specialized legal advice focusing on AI regulation, data protection and compliance structures is available.

Independent legal advice. No automated legal information. The platform ai-playbook.eu does not provide legal advice.

Next Steps

  1. Conduct a structured risk classification.
  2. Document the purpose and functionality of your systems.
  3. Carefully review exceptions under Art. 6(3).
  4. Develop a formal risk management system.
  5. Plan conformity assessment and CE marking.

