The EU AI Act’s “Don’t Do This” List: Prohibited AI Practices Explained
A quick, plain-English guide to the AI systems that are completely banned under the EU AI Act — and the few narrow exceptions that exist.
The “Don’t Do This” List
The EU AI Act draws a firm line between regulated AI and forbidden AI.
While most AI systems can operate under specific conditions or risk levels, a handful of practices are flat-out banned because they’re considered an “unacceptable risk” to safety, rights, and democracy.
Article 5 of the Act defines these practices precisely.
Here’s what sits on the EU’s “Do Not Build” list — and why.
Manipulative or Deceptive AI
Any AI system that uses manipulative or deceptive techniques to materially distort people's behavior or decision-making in a way that causes, or is likely to cause, significant harm is banned.
The same goes for AI designed to exploit vulnerabilities, such as age, disability, or a person's social or economic situation, to push users toward specific actions or choices.
Examples:
AI toys that use emotional manipulation to influence children’s behavior.
Voice assistants that pressure users into purchases using deceptive cues.
✅ Design takeaway: If your AI subtly manipulates, nudges, or coerces users in ways they can’t reasonably detect, it’s crossing the line.
Social Scoring
Systems that score or rank individuals based on social behavior, predicted personality traits, or compliance with norms are banned outright.
Why?
Because such systems risk discrimination, stigmatization, and abuse of power.
Although government systems are the best-known example, the final text of the Act bans social scoring by public and private actors alike.
Examples:
Government scoring systems that assign “trustworthiness” or “citizenship grades.”
AI-driven penalties or benefits tied to social behavior, political expression, or personal beliefs.
✅ Design takeaway: No scoring, rating, or ranking of people for moral, behavioral, or social worthiness.
Untargeted Biometric Scraping
The AI Act bans the untargeted scraping of facial images from the internet or CCTV footage to create or expand facial-recognition databases.
Why?
Because it violates privacy and dignity by collecting personal data without consent or legal basis.
Examples:
Building facial-recognition databases by scraping images from the web or social media.
Using mass data collection to train biometric identification models.
✅ Design takeaway: Collect biometric data only with clear consent, purpose limitation, and legal authorization.
Real-Time Remote Biometric Identification in Public Spaces
Perhaps the most controversial prohibition: real-time facial recognition in public places is banned — except for very narrow law enforcement exceptions.
The rule:
Deploying real-time remote biometric identification (RBI) systems in publicly accessible spaces is not allowed, except where strictly necessary and legally authorized for:
Targeted searches for specific victims of abduction, trafficking, or sexual exploitation, or for missing persons.
Preventing a specific, substantial, and imminent threat to life or physical safety, or a genuine and foreseeable threat of a terrorist attack.
Locating or identifying suspects of serious crimes listed in the Act, subject to prior authorization by a judicial or independent administrative authority.
Even in these exceptional cases, such systems must be proportionate, time-limited, and subject to oversight.
✅ Design takeaway: Unless you’re operating under specific judicial authorization for law enforcement, no real-time biometric surveillance in public.
Predictive Policing & Emotion Recognition in Sensitive Contexts
Both made the final list. The Act bans AI that predicts the risk of a person committing a crime based solely on profiling or personality traits, and AI that infers emotions in workplaces or educational institutions (with narrow exceptions for medical and safety reasons).
Visual Summary: Prohibited AI Practices
Manipulative or deceptive AI, including AI that exploits vulnerabilities.
Social scoring of individuals.
Untargeted scraping of facial images to build recognition databases.
Real-time remote biometric identification in public spaces (narrow law-enforcement exceptions only).
Predictive policing based solely on profiling or personality traits.
Emotion recognition in workplaces and schools.
Why It Matters
These prohibitions form the moral backbone of the EU AI Act.
They’re not about red tape — they’re about trust.
By outlawing manipulative or surveillance-heavy AI, the EU aims to ensure AI remains a tool for empowerment, not control.
For developers, this section is the simplest part of the law: if your system could fit any of these descriptions, don’t deploy it.
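To make that concrete, here's a minimal sketch of what an internal pre-deployment screen might look like. The checklist keys, question wording, and the `screen_system` helper are illustrative inventions for this post, not the Act's legal text or any official compliance tool.

```python
# A hypothetical pre-deployment screen loosely mirroring the Article 5
# categories discussed above. The questions paraphrase this post, not
# the legal wording of the Act.
ARTICLE_5_SCREEN = {
    "manipulation": (
        "Does the system use manipulative or deceptive techniques, or "
        "exploit vulnerabilities, to distort behavior?"
    ),
    "social_scoring": (
        "Does the system score or rank people based on social behavior "
        "or personal traits?"
    ),
    "biometric_scraping": (
        "Does the system build face-recognition databases by untargeted "
        "scraping of the internet or CCTV footage?"
    ),
    "realtime_rbi": (
        "Does the system perform real-time remote biometric "
        "identification in publicly accessible spaces?"
    ),
    "predictive_policing": (
        "Does the system predict criminal risk based solely on profiling "
        "or personality traits?"
    ),
    "emotion_recognition": (
        "Does the system infer emotions in a workplace or school, outside "
        "medical or safety uses?"
    ),
}


def screen_system(answers: dict[str, bool]) -> list[str]:
    """Return the checklist categories a system was flagged under."""
    return [category for category, flagged in answers.items() if flagged]


if __name__ == "__main__":
    # Example: a hypothetical classroom "engagement tracker".
    answers = {category: False for category in ARTICLE_5_SCREEN}
    answers["emotion_recognition"] = True

    flagged = screen_system(answers)
    if flagged:
        print("Hard stop - get legal review before deployment:", flagged)
    else:
        print("No Article 5 categories flagged by this checklist.")
```

A "yes" anywhere on this list is a hard stop, not a risk score to weigh against business value: prohibited practices have no compliance pathway.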