Prohibited AI Practices under Art. 5 EU AI Act

Which AI practices are fully prohibited under Art. 5 EU AI Act? Detailed overview of all banned systems, exceptions, distinctions and penalty risks.

11 February 2026 · 5 min read
EU AI Act · Prohibited AI · Art. 5 · Social Scoring · Biometrics · Compliance

Overview

The EU AI Act follows a risk-based approach. While many AI systems are regulated or classified as high-risk, there is a small, clearly defined category of fully prohibited AI practices.

These practices are deemed incompatible with the fundamental rights of the European Union. They cannot be authorised -- not even with additional safeguards.

Since 2 February 2025, these prohibitions have been directly applicable.

This article explains:

  • All prohibited practices under Art. 5
  • Political and fundamental rights background
  • Exceptions (e.g. law enforcement)
  • Distinction from permitted systems
  • Penalty risks

Core Principle: Protection of Fundamental Rights

Art. 5 prohibits AI systems that:

  • Manipulate people
  • Systematically discriminate
  • Severely interfere with fundamental rights
  • Enable mass societal surveillance

Absolute Prohibitions

Unlike high-risk AI, these systems cannot be regulated -- they are fully banned.

Manipulative AI Systems

Systems are prohibited that:

  • Use subliminal or purposefully manipulative techniques
  • Materially distort the behaviour of persons
  • Undermine informed, free decision-making
  • Cause or are likely to cause significant harm

Example: AI that exploits psychological weaknesses to manipulate purchasing decisions.

Exploitation of Vulnerable Persons

Systems are prohibited that target and manipulate or exploit persons who are vulnerable due to:

  • Age (e.g. children or elderly persons)
  • Disability
  • A specific social or economic situation

Example: AI-based game mechanics that deliberately push minors towards in-app purchases.

Social Scoring

The following is prohibited:

  • Evaluation or classification of persons by public or private actors
  • Based on social behaviour or personality traits
  • Where it leads to detrimental treatment that is unrelated to the original context or disproportionate to the behaviour

This is intended to prevent systems reminiscent of authoritarian surveillance models.

Real-Time Remote Biometric Identification in Public Spaces

As a rule, the following is prohibited:

  • Real-time remote biometric identification (e.g. live facial recognition) in publicly accessible spaces for law enforcement purposes

Exceptions exist only under strict conditions, for example for:

  • Targeted searches for victims of abduction or human trafficking
  • Prevention of an imminent terrorist attack or threat to life
  • Localisation of suspects of serious criminal offences

These exceptions are subject to strict authorisation procedures.

Emotion Recognition in the Workplace or Educational Institutions

In these settings, the use of AI is prohibited for:

  • Inferring emotional states
  • Assessing employees
  • Analysing the performance of pupils or students

An exception applies only where the system is used for medical or safety reasons.

Rationale: High error susceptibility and significant discrimination risks.

Predictive Policing Based on Personal Data

Prohibited are systems that:

  • Predict the risk that a person will commit a criminal offence
  • Rely solely on profiling or the assessment of personality traits
  • Create individual risk assessments of natural persons

The aim is to prevent discriminatory prediction models.

Untargeted Facial Image Databases

The following is prohibited:

  • Untargeted scraping of facial images from the internet or CCTV footage
  • In order to create or expand facial recognition databases

Biometric Categorisation by Sensitive Characteristics

The use of biometric data to categorise persons by the following characteristics is prohibited:

  • Ethnicity
  • Religion
  • Sexual orientation
  • Political beliefs

Such systems are considered particularly threatening to fundamental rights.

Summary Table

Prohibited practice -- Purpose of the prohibition

  • Manipulative AI -- Protection of freedom of choice
  • Exploitation of vulnerable persons -- Protection of vulnerable groups
  • Social scoring -- Prevention of state surveillance
  • Real-time biometrics -- Protection of privacy
  • Emotion recognition (workplace/education) -- Protection against misinterpretation
  • Predictive policing -- Prevention of discrimination
  • Facial image databases -- Protection against mass surveillance
  • Biometric categorisation -- Protection against discrimination

Distinction from High-Risk AI

Not every biometric application is prohibited.

Example:

  • Biometric authentication in a private setting -- permitted
  • Real-time surveillance in public spaces -- prohibited

The decisive factors are:

  • Context of deployment
  • Purpose
  • Target group
  • Intensity of the intrusion

Context Is Decisive

An identical technical system can be permitted or prohibited -- depending on the use case.
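To make this context test concrete, here is a minimal, purely illustrative sketch in Python. The function name, parameters and result labels are assumptions made for this example; an actual classification always requires a full legal analysis.

```python
def classify_biometric_use(real_time: bool, public_space: bool,
                           law_enforcement_exception: bool = False) -> str:
    """Illustrative sketch of the context test described above.
    Not a legal assessment -- names and labels are hypothetical."""
    if real_time and public_space:
        if law_enforcement_exception:
            # Narrow law-enforcement exceptions require prior authorisation.
            return "exception: only with prior authorisation"
        return "prohibited (Art. 5)"
    # e.g. biometric authentication in a private setting
    return "not prohibited per se -- check high-risk classification"

print(classify_biometric_use(real_time=True, public_space=True))
# prohibited (Art. 5)
print(classify_biometric_use(real_time=False, public_space=False))
# not prohibited per se -- check high-risk classification
```

The same function inputs can flip the outcome from permitted to prohibited, which is exactly why deployment context must be documented.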

Fines under Art. 99

Violations of Art. 5 can be sanctioned with:

  • Up to EUR 35 million or
  • Up to 7% of total worldwide annual turnover of the preceding financial year,

whichever is higher.

This is the highest fine tier in the AI Act.
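Under Art. 99(3), the two ceilings combine as "whichever is higher". A minimal sketch of that arithmetic, with a hypothetical helper name:

```python
def art5_fine_ceiling(worldwide_annual_turnover_eur: float) -> float:
    """Upper fine limit under Art. 99(3) AI Act for Art. 5 violations:
    EUR 35 million or 7 % of total worldwide annual turnover,
    whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# A company with EUR 1 billion turnover faces a ceiling of EUR 70 million;
# below EUR 500 million turnover, the fixed EUR 35 million cap applies.
print(art5_fine_ceiling(1_000_000_000))  # 70000000.0
```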

Common Misconceptions

Assumption -- Reality

  • "Emotion recognition is generally prohibited" -- Only in workplace and educational settings
  • "Biometrics are fundamentally banned" -- Only certain applications are prohibited
  • "The private sector is not affected" -- Companies must also observe the prohibitions

Connection to the GDPR

Many prohibited practices overlap with:

  • Art. 9 GDPR (special categories)
  • Art. 22 GDPR (automated decisions)
  • Principles under Art. 5 GDPR

Parallel review is required.

Practical Implementation

Step 1 -- AI Inventory

  • What biometric functions exist?
  • Are emotions being analysed?
  • Is profiling with sensitive data taking place?

Step 2 -- Purpose Review

  • Is the system deployed in public spaces?
  • Does it affect vulnerable groups?

Step 3 -- Documentation

  • Proof of lawfulness
  • Purpose definition
  • Risk analysis

Step 4 -- Shutdown or Adaptation

If a prohibition applies:

  • Deactivate the function
  • Adapt the system architecture
  • Review alternative solutions
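The four steps above can be sketched as a simple screening helper. This is an illustrative first-pass filter with hypothetical flag names, not a legal assessment:

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    """Hypothetical inventory flags from Steps 1 and 2."""
    biometric_identification: bool = False
    public_spaces: bool = False
    emotion_analysis: bool = False
    workplace_or_education: bool = False
    sensitive_profiling: bool = False
    targets_vulnerable_groups: bool = False

def screen_for_art5(profile: AISystemProfile) -> list[str]:
    """Return Art. 5 prohibitions that warrant closer legal review.
    A coarse filter only -- not a legal assessment."""
    flags = []
    if profile.biometric_identification and profile.public_spaces:
        flags.append("real-time remote biometric identification")
    if profile.emotion_analysis and profile.workplace_or_education:
        flags.append("emotion recognition in workplace/education")
    if profile.sensitive_profiling:
        flags.append("biometric categorisation / predictive profiling")
    if profile.targets_vulnerable_groups:
        flags.append("exploitation of vulnerable persons")
    return flags
```

A non-empty result means Step 4 applies: deactivate the function, adapt the architecture, or review alternatives -- and document the outcome either way.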

Need help implementing?

Work with Creativate AI Studio to design, validate and implement AI systems — technically sound, compliant and production-ready.

Need legal clarity?

For specific legal questions on the AI Act and GDPR, specialized legal advice focusing on AI regulation, data protection and compliance structures is available.

Independent legal advice. No automated legal information. The platform ai-playbook.eu does not provide legal advice.

Next Steps

  1. Review your systems for practices prohibited under Art. 5.
  2. Document the deployment context and intended purpose.
  3. Review biometric and emotional analysis functions.
  4. Assess discrimination risks.
  5. Obtain legal advice if in doubt.
