Is My System High-Risk? Understanding the EU AI Act’s Most Critical Classification

A plain-English walkthrough of what makes an AI system “high-risk” under the EU AI Act — with practical cues, Annex III mapping, and key exceptions.

Why “High-Risk” Matters

If your AI system is classified as high-risk under the EU AI Act, it triggers the most extensive set of obligations — covering design, documentation, testing, and ongoing monitoring.

Knowing whether your system falls into this category isn’t always straightforward.
The classification depends on where and how your AI is used, and whether it matches specific use cases listed in Annex III of the Act.

This guide helps you decide whether your system qualifies — and what to do if it does.

  1. What Does “High-Risk” Mean?

“High-risk” AI systems are those that pose a significant risk to the health, safety, or fundamental rights of individuals when used as intended.

The EU AI Act recognizes two main routes into this category:

  1. Route A – Product Safety Connection
    AI systems that form part of regulated products under EU safety law (e.g. medical devices, toys, vehicles, aviation systems).
    These are automatically considered high-risk when the product must undergo a third-party conformity assessment under that sectoral legislation.

  2. Route B – Annex III Use Cases
    Standalone AI systems performing sensitive functions in specific areas like employment, education, law enforcement, or access to public services.

If your AI system falls under any Annex III use case, you’re in high-risk territory.
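
To make the two routes concrete, here is a minimal Python sketch of the screening logic. The names (`is_regulated_safety_component`, `ANNEX_III_AREAS`, `intended_purposes`) are illustrative shorthand for this article, not terms defined in the Act.

```python
# Illustrative screen for the two routes into "high-risk".
# All names are shorthand, not legal terms; this is not legal advice.

ANNEX_III_AREAS = {
    "biometrics",
    "critical_infrastructure",
    "education_and_training",
    "employment",
    "essential_services",
    "law_enforcement",
    "migration_and_border_control",
    "justice_and_democracy",
}

def is_high_risk(is_regulated_safety_component: bool,
                 intended_purposes: set[str]) -> bool:
    """Route A: the AI is a safety component of a product under EU safety law.
    Route B: the intended purpose matches an Annex III area."""
    if is_regulated_safety_component:                 # Route A
        return True
    return bool(intended_purposes & ANNEX_III_AREAS)  # Route B

# A CV-screening tool has an "employment" purpose, so it lands in Route B.
print(is_high_risk(False, {"employment"}))  # True
```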

  2. The Annex III Use Case Map

Annex III lists eight categories of high-risk AI applications:

  1. Biometrics: Facial recognition, emotion analysis, biometric categorization

  2. Critical Infrastructure: AI managing power grids, water supply, or traffic systems

  3. Education & Training: Exam scoring, admissions decisions, student performance analysis

  4. Employment: AI screening candidates, ranking CVs, or evaluating employee performance

  5. Essential Private & Public Services: Credit scoring, insurance risk assessment, social benefits eligibility

  6. Law Enforcement: Predictive policing, crime analytics, suspect identification

  7. Migration & Border Control: Risk assessments, visa or asylum decision-making

  8. Administration of Justice & Democracy: AI systems aiding judicial decisions or evidence analysis

Design takeaway: If your system influences opportunities, rights, or safety in these contexts, it’s almost certainly high-risk.
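
For teams that keep an AI-system inventory, the eight areas can be captured in a simple lookup. A minimal sketch, using shorthand labels that paraphrase the list above (they are not the Act’s legal wording):

```python
# Shorthand map of the eight Annex III areas to example applications.
# Labels paraphrase this article's list; they are not the Act's legal text.
ANNEX_III_MAP = {
    "biometrics": ["facial recognition", "emotion analysis", "biometric categorization"],
    "critical_infrastructure": ["power grids", "water supply", "traffic systems"],
    "education_and_training": ["exam scoring", "admissions", "student performance analysis"],
    "employment": ["candidate screening", "cv ranking", "employee evaluation"],
    "essential_services": ["credit scoring", "insurance risk assessment", "benefits eligibility"],
    "law_enforcement": ["predictive policing", "crime analytics", "suspect identification"],
    "migration_and_border_control": ["risk assessments", "visa decisions", "asylum decisions"],
    "justice_and_democracy": ["judicial decision support", "evidence analysis"],
}

def matching_areas(use_case: str) -> list[str]:
    """Return Annex III areas whose example applications mention the use case."""
    return [area for area, examples in ANNEX_III_MAP.items()
            if any(use_case.lower() in ex for ex in examples)]

print(matching_areas("credit scoring"))  # ['essential_services']
```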

  3. When a “High-Risk” Label Might Not Apply

There are some exceptions and clarifications:

  • General-purpose AI models (like LLMs) are not automatically high-risk — unless used in an Annex III application.

  • Research prototypes and sandbox tests are exempt until the system is placed on the market or put into service.

  • Human-in-the-loop systems can still be high-risk if human oversight is minimal or ineffective.

  • Open-source tools may fall under high-risk obligations if fine-tuned for Annex III purposes.

Rule of thumb: It’s not about what your model is — it’s about what it’s used for.
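
These exceptions can be layered onto the Annex III check. A hedged sketch, assuming simplified boolean flags (all names hypothetical; real classification needs legal review):

```python
# Illustrative exception screen layered on the Annex III check.
# Flags are hypothetical simplifications; not legal advice.

def high_risk_after_exceptions(matches_annex_iii: bool,
                               is_research_prototype: bool,
                               on_the_market: bool) -> bool:
    # Research prototypes and sandbox tests are out of scope until
    # placed on the market or put into service.
    if is_research_prototype and not on_the_market:
        return False
    # What the system is used for decides: a general-purpose model or an
    # open-source tool becomes high-risk once used for an Annex III purpose.
    # Note: human-in-the-loop does NOT by itself remove the high-risk label.
    return matches_annex_iii

print(high_risk_after_exceptions(True, is_research_prototype=True, on_the_market=False))   # False
print(high_risk_after_exceptions(True, is_research_prototype=False, on_the_market=True))   # True
```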

  4. Key Obligations for High-Risk Systems

Once a system qualifies as high-risk, Articles 8–15 and related provisions set out a strict compliance regime:

Providers must:

  • Implement a risk management system throughout the lifecycle.

  • Ensure data governance and quality control.

  • Maintain technical documentation and logging capabilities.

  • Conduct conformity assessments before release.

  • Enable human oversight and explainability.

  • Guarantee accuracy, robustness, and cybersecurity.

  • Register the system in the EU AI database (managed by the European Commission).

Deployers must:

  • Use the AI system as intended by the provider.

  • Monitor and report serious incidents.

  • Keep usage logs and ensure trained personnel handle AI operations.
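
One way to track these duties is a role-based checklist. The sketch below paraphrases the two lists above into a small data structure; the item wording is shorthand, not the Act’s text:

```python
# Role-based compliance checklist paraphrasing this article's lists.
# Not exhaustive and not legal advice.
OBLIGATIONS = {
    "provider": [
        "risk management system across the lifecycle",
        "data governance and quality control",
        "technical documentation and logging",
        "conformity assessment before release",
        "human oversight and explainability",
        "accuracy, robustness, cybersecurity",
        "registration in the EU AI database",
    ],
    "deployer": [
        "use the system as the provider intended",
        "monitor and report serious incidents",
        "keep usage logs; train operating personnel",
    ],
}

def open_items(role: str, done: set[str]) -> list[str]:
    """Return obligations for a role that are not yet marked done."""
    return [item for item in OBLIGATIONS[role] if item not in done]

print(open_items("deployer", {"monitor and report serious incidents"}))
```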

  5. Design → Deployment Lifecycle

The high-risk framework applies from design to deployment:

  • Design: Risk classification, data validation, bias assessment

  • Testing: Conformity checks, documentation, pre-market review

  • Deployment: Human oversight, monitoring, incident reporting

  • Post-market: Continuous evaluation, updates, and audits

Best practice: Treat compliance as a living process — not a one-time checklist.
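
One way to operationalize this is to encode each phase as a release gate that must pass before the next phase begins. A minimal sketch, with gate contents taken from the list above:

```python
# Illustrative lifecycle gates: each phase lists checks that must pass
# before moving on. Contents paraphrase the lifecycle list above.
PHASE_GATES = {
    "design": ["risk classification", "data validation", "bias assessment"],
    "testing": ["conformity checks", "documentation", "pre-market review"],
    "deployment": ["human oversight", "monitoring", "incident reporting"],
    "post-market": ["continuous evaluation", "updates", "audits"],
}

def gate_passed(phase: str, completed: set[str]) -> bool:
    """A phase gate passes only when every required check is completed."""
    return all(check in completed for check in PHASE_GATES[phase])

# "bias assessment" is still missing, so the design gate stays closed.
print(gate_passed("design", {"risk classification", "data validation"}))  # False
```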

  6. Visual Cue: High-Risk Decision Flow

Is the system a safety component of a product regulated under EU safety law (Route A)?
  → Yes: high-risk.
  → No: does its intended purpose match an Annex III use case (Route B)?
    → Yes: high-risk (unless one of the exceptions above applies).
    → No: not high-risk, though transparency or general-purpose AI rules may still apply.

  7. Examples in Context

  • HR recruitment tool: Screening CVs → Annex III, high-risk.

  • AI traffic control system: Managing critical infrastructure → high-risk.

  • Chatbot providing general advice: Not high-risk (transparency obligations apply).

  • Foundation model fine-tuned for credit scoring: Becomes high-risk once placed on the market or put into service for that purpose.

  8. Why It Matters

High-risk classification isn’t meant to block innovation — it’s meant to build trust.
By requiring safety, oversight, and transparency, the EU aims to make AI adoption both responsible and sustainable.
For businesses, compliance becomes a competitive advantage: only compliant high-risk systems will be allowed on the EU market for these sensitive uses.

Get started

See where artificial intelligence helps—fast.

In one short call we identify decisions, KPIs, and data gaps; you receive a written Discovery Brief within 24 hours.
