Understanding the EU AI Act’s Risk Ladder: From Prohibited AI to GPAI Obligations

A clear visual and conceptual map of the EU AI Act’s four risk regimes — how they’re defined, when they apply, and what each means for AI providers and deployers.

2024

The Risk Ladder at the Heart of the EU AI Act

The EU AI Act doesn’t regulate all AI equally. Instead, it introduces a “risk ladder” — a layered approach to AI governance designed to balance innovation with protection.
Each rung on this ladder corresponds to a level of potential harm an AI system might pose to individuals or society, triggering different legal obligations.

This model spans four main categories:

  1. Prohibited AI Practices

  2. High-Risk AI Systems

  3. AI Requiring Transparency Obligations

  4. General-Purpose AI (GPAI)

Understanding where your system sits on this ladder is the foundation of compliance.
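
An illustrative way to make that mapping concrete is to record the tier for every system in your inventory as an explicit type. The sketch below is a minimal Python illustration, not language from the Act; the `RiskTier` and `AISystemRecord` names are assumptions for this example.

```python
from dataclasses import dataclass
from enum import Enum, auto


class RiskTier(Enum):
    """The four regimes of the EU AI Act's risk ladder."""
    PROHIBITED = auto()     # Article 5: banned outright
    HIGH_RISK = auto()      # Article 6 & Annex III: conformity regime
    TRANSPARENCY = auto()   # Article 50: disclosure duties
    GPAI = auto()           # Chapter V: general-purpose model obligations


@dataclass
class AISystemRecord:
    """Minimal inventory entry: every system declares its tier and why."""
    name: str
    intended_purpose: str
    tier: RiskTier
    justification: str      # short written rationale for the classification


# Example inventory entry for an Annex III use case.
recruitment_screener = AISystemRecord(
    name="cv-ranker",
    intended_purpose="Rank job applications for recruiters",
    tier=RiskTier.HIGH_RISK,
    justification="Employment and recruitment use case listed in Annex III",
)
```

Keeping the tier and its written justification next to the system definition makes later conformity work and audits far easier than reconstructing the reasoning after launch.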

  1. Prohibited AI Systems (Article 5)

At the top of the ladder — and completely off-limits — are AI systems deemed an unacceptable risk. These are practices that directly conflict with EU values or fundamental rights.

Examples of prohibited AI include:

  • AI used for social scoring by governments or authorities.

  • Manipulative or subliminal AI that distorts human behavior or decision-making.

  • Biometric categorization based on sensitive traits like political or religious beliefs.

  • Real-time remote biometric identification in public spaces (with narrow exceptions).

Design implication: If your AI system risks infringing on fundamental rights or human dignity, it must be re-engineered — these use cases are flatly banned within the EU.
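
One practical consequence is to screen proposed use cases against the Article 5 categories above before any build work starts. The keyword map and `screen_use_case` helper below are hypothetical illustrations of such an intake check, not an official taxonomy.

```python
# Hypothetical intake screen against the prohibited-practice categories above.
PROHIBITED_PRACTICES = {
    "social_scoring": "Social scoring by governments or authorities",
    "subliminal_manipulation": "Manipulative or subliminal techniques",
    "sensitive_biometric_categorisation": "Biometric categorisation on sensitive traits",
    "realtime_remote_biometric_id": "Real-time remote biometric ID in public spaces",
}


def screen_use_case(declared_tags: set[str]) -> list[str]:
    """Return the prohibited practices a proposed use case would touch."""
    return [PROHIBITED_PRACTICES[tag] for tag in declared_tags & PROHIBITED_PRACTICES.keys()]


# A proposal tagged with "social_scoring" is flagged before any engineering effort is spent.
assert screen_use_case({"social_scoring", "chatbot"}) == [
    "Social scoring by governments or authorities"
]
```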

  2. High-Risk AI Systems (Article 6 & Annex III)

High-risk AI represents the Act’s most tightly regulated category. These systems are not banned, but they are subject to extensive compliance and oversight before being placed on the market.

High-risk status is triggered when an AI system is:

  • Part of a product regulated under EU product safety law (e.g., medical devices, machinery).

  • Listed in Annex III, including:

    • Recruitment and HR tools

    • Education and exams

    • Credit scoring

    • Border control

    • Law enforcement

    • Access to essential services

Key obligations for high-risk AI:

  • Risk management systems

  • Data governance and quality assurance

  • Transparency and record-keeping

  • Human oversight mechanisms

  • Accuracy, robustness, and cybersecurity controls

Design implication: Compliance starts before launch. Providers must implement conformity assessments and maintain post-market surveillance systems.
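
One way to operationalise this is to treat each obligation above as an explicit, auditable artefact rather than scattered documents. The `HighRiskComplianceFile` structure below is a minimal sketch of that idea; the field names are illustrative and not terms defined by the Act.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class HighRiskComplianceFile:
    """Illustrative evidence record for a single high-risk AI system."""
    system_name: str
    risk_management_plan: str           # link to the living risk register
    data_governance_report: str         # training/validation data quality evidence
    technical_documentation: str        # record-keeping and logging design
    human_oversight_measures: str       # who can intervene, and how
    robustness_test_results: str        # accuracy, robustness, cybersecurity tests
    conformity_assessment_date: date | None = None
    post_market_incidents: list[str] = field(default_factory=list)

    def ready_for_market(self) -> bool:
        """Crude gate: the system ships only after the conformity assessment."""
        return self.conformity_assessment_date is not None
```

Post-market surveillance then becomes a matter of appending to `post_market_incidents` and periodically reviewing the file, rather than a separate process bolted on after launch.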

  3. Limited-Risk / Transparency Obligations (Article 50)

Some AI systems do not pose serious risk but can still influence user decisions or behavior.
These systems trigger transparency obligations, ensuring users are aware they are interacting with AI.

Examples:

  • Chatbots and virtual assistants.

  • AI-generated or manipulated media (e.g., deepfakes).

  • Emotion recognition or biometric categorization tools.

Transparency rules require:

  • Informing users they are interacting with AI.

  • Clear labeling of AI-generated content.

  • Disclosure when synthetic content or media is used.

Design implication: Integrate disclosure mechanisms (labels, user notices, explainability features) from the design phase.
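
For a chatbot or media generator, the simplest approach is to build the disclosure into the output path itself so it cannot be skipped by any client. A minimal sketch, assuming custom helper functions rather than any real library API:

```python
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."


def open_conversation() -> list[dict]:
    """Start every session with an explicit, user-visible AI disclosure."""
    return [{"role": "system_notice", "content": AI_DISCLOSURE}]


def label_generated_media(caption: str) -> str:
    """Attach a plain-language label to AI-generated or AI-manipulated media."""
    return f"{caption} [AI-generated content]"


transcript = open_conversation()
print(transcript[0]["content"])                   # shown before the first reply
print(label_generated_media("Product photo v2"))  # "Product photo v2 [AI-generated content]"
```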

  4. General-Purpose AI (GPAI) — Chapter V

A later addition to the AI Act, the GPAI chapter covers foundation models and other large-scale general-purpose systems that can be adapted across many applications, such as LLMs, image generators, and multimodal models.

These systems can amplify downstream risks across multiple sectors, so the Act introduces specific obligations for GPAI providers.

Two sub-categories:

  1. General GPAI — must comply with transparency, documentation, and data governance standards.

  2. GPAI with Systemic Risk — additional obligations, such as:

    • Independent testing and evaluation

    • Incident reporting

    • Robust cybersecurity and risk mitigation

    • Public technical documentation (“model cards”)

Design implication: GPAI developers must document training data, testing methodologies, and usage restrictions — and support downstream deployers in maintaining compliance.
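
Much of that documentation can live in a structured, machine-readable record that ships with the model. The fields below loosely mirror common "model card" practice; they are an assumed structure for illustration, not the Act's required template.

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class GPAIModelCard:
    """Illustrative model card for a general-purpose AI model."""
    model_name: str
    training_data_summary: str       # public summary of training content
    evaluation_methods: list[str]    # benchmarks, red-teaming, adversarial tests
    known_limitations: list[str]
    prohibited_uses: list[str]       # usage restrictions passed to deployers
    systemic_risk: bool              # triggers the heavier systemic-risk tier
    contact: str


card = GPAIModelCard(
    model_name="example-llm-7b",
    training_data_summary="Web text and licensed corpora; summary published separately",
    evaluation_methods=["standard NLP benchmarks", "internal red-teaming"],
    known_limitations=["hallucination", "limited non-English coverage"],
    prohibited_uses=["social scoring", "covert manipulation"],
    systemic_risk=False,
    contact="compliance@example.com",
)

print(json.dumps(asdict(card), indent=2))  # hand this record to downstream deployers
```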

Visual Map: The EU AI Act Risk Ladder

 ┏━━ PROHIBITED (Art. 5)
 ┃       Unacceptable risk (banned uses)
 ┣━━ HIGH-RISK (Art. 6 & Annex III)
 ┃       Strict conformity, documentation, oversight
 ┣━━ TRANSPARENCY OBLIGATIONS (Art. 50)
 ┃       Disclosure and labeling duties
 ┗━━ GPAI (Ch. V)
         System-level transparency & systemic risk management

These rungs are not mutually exclusive: a single system can trigger obligations from more than one of them (a high-risk system that users interact with also carries Article 50 disclosure duties, and a product built on a GPAI model relies on documentation from that model's provider), and the compliance burden grows the closer a use case sits to the top of the risk hierarchy.

Key Takeaway

The EU AI Act’s risk-based framework is not just a legal formality — it’s a practical compliance roadmap.
By mapping your AI systems to the correct risk level at the design stage, you can anticipate obligations, streamline audits, and future-proof your products for the evolving European regulatory landscape.

Get started

See where artificial intelligence helps—fast.

In one short call we identify decisions, KPIs, and data gaps; you receive a written Discovery Brief within 24 hours.