Art. 9 GDPR – Special Categories of Personal Data

Which special categories of personal data are particularly protected under Art. 9 GDPR? Overview of the processing prohibition, exceptions and AI-specific risks such as proxy discrimination and sensitive inferences.

11 February 2026 · 4 min read

GDPR · Art. 9 · Special Categories · Health Data · Biometrics · AI Compliance

Overview

Art. 9 GDPR protects so-called special categories of personal data. The processing of these data is in principle prohibited -- unless one of the narrowly defined exceptions applies.

In the context of AI systems, Art. 9 is particularly relevant because modern models frequently process, infer or unintentionally reconstruct sensitive information.

This article explains:

  • Which data fall under Art. 9
  • Which exceptions are permissible
  • Why AI systems pose particular risks here
  • The difference between pseudonymisation and anonymisation
  • Practical implementation steps

Which Data Are "Special Categories"?

Art. 9(1) GDPR lists:

  • Data revealing racial or ethnic origin
  • Political opinions
  • Religious or philosophical beliefs
  • Trade union membership
  • Genetic data
  • Biometric data for the purpose of uniquely identifying a natural person
  • Health data
  • Data concerning sex life or sexual orientation

Principle

The processing of these data is in principle prohibited.

What Does "In Principle Prohibited" Mean?

Unlike ordinary personal data, which only require a legal basis under Art. 6, special categories require a double legal basis:

  • A legal basis under Art. 6 alone is not sufficient
  • Additionally, an exception under Art. 9(2) must apply

Without an exception, the processing is unlawful.

The Most Important Exceptions under Art. 9(2)

There are ten exception provisions. The most practically relevant are:

  Exception                        Typical Context
  a) Explicit consent              Health apps
  b) Employment law obligations    HR systems
  g) Substantial public interest   Public authorities
  h) Healthcare provision          Medical AI
  j) Scientific research           AI research

Each exception is to be interpreted narrowly.

AI-Specific Challenges

Unintended Inference of Sensitive Data

AI systems can infer sensitive characteristics from seemingly neutral data, for example:

  • political orientation from social media patterns
  • health conditions from behavioural data

These indirect inferences can also fall under Art. 9.

Inference Problem

Even if sensitive data are not directly collected, a model-based inference can trigger Art. 9.

Proxy Discrimination

A system may not use sensitive data directly, yet still rely on correlated proxies:

  • Postcode -- indirect ethnic attribution
  • Purchasing behaviour -- religious inferences
  • Language patterns -- assumptions about origin

This can effectively lead to discrimination, even without explicit capture of sensitive characteristics.
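The proxy risk described above can be audited empirically. The following is a minimal sketch, assuming hypothetical audit records that lawfully include both the candidate proxy feature and the sensitive attribute; the metric and field names are illustrative, not a standard:

```python
from collections import Counter, defaultdict

def proxy_strength(records, proxy_key, sensitive_key):
    """Rough audit metric: how much better can the sensitive attribute
    be guessed from the proxy feature than from the overall base rate?
    ~0 suggests little proxy risk; ~1 means the proxy almost determines
    the sensitive attribute."""
    overall = Counter(r[sensitive_key] for r in records)
    baseline = overall.most_common(1)[0][1] / len(records)

    by_proxy = defaultdict(Counter)
    for r in records:
        by_proxy[r[proxy_key]][r[sensitive_key]] += 1
    # Accuracy of guessing the majority class within each proxy group
    correct = sum(c.most_common(1)[0][1] for c in by_proxy.values())
    grouped = correct / len(records)
    return (grouped - baseline) / (1 - baseline) if baseline < 1 else 0.0

# Hypothetical sample where postcode acts as a near-perfect ethnic proxy
sample = [
    {"postcode": "10115", "ethnicity": "A"},
    {"postcode": "10115", "ethnicity": "A"},
    {"postcode": "80331", "ethnicity": "B"},
    {"postcode": "80331", "ethnicity": "B"},
]
print(proxy_strength(sample, "postcode", "ethnicity"))  # 1.0 here
```

A high score on such a check is a signal to investigate the feature before training, not a legal conclusion in itself.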

Biometric Data

Particularly relevant in the AI context:

  • Facial recognition
  • Fingerprint analysis
  • Voice recognition

Not every processing of biometric data falls under Art. 9 -- only processing carried out for the purpose of uniquely identifying a natural person.

Pseudonymisation vs. Anonymisation

  Feature                    Pseudonymisation                     Anonymisation
  Re-identification          Possible with additional knowledge   Not possible
  GDPR applicable            Yes                                  No
  Suitable for AI training   Yes                                  Limited

Many AI projects work with pseudonymised data -- these remain GDPR-relevant, however.

Misconception

Pseudonymised data are not anonymous data.
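The distinction can be illustrated in code. Below is a minimal pseudonymisation sketch using a keyed hash (HMAC); the key handling and identifier are hypothetical, and real deployments would need proper key management:

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-and-store-separately"  # hypothetical key management

def pseudonymise(identifier: str) -> str:
    """Keyed hash: a stable token per person, but the mapping can be
    re-created by anyone holding the key -- so the data remain
    pseudonymised (GDPR applies), not anonymised."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

token = pseudonymise("patient-4711")
# Same input + same key -> same token: re-identification with additional
# knowledge (the key) stays possible, which is why the GDPR still applies.
assert token == pseudonymise("patient-4711")
```

Anonymisation, by contrast, would require that no such additional knowledge can restore the link to a person at all.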

Health Data in the AI Context

Health data include:

  • Diagnoses
  • Laboratory values
  • Medication plans
  • Genetic information

Medical AI systems are therefore subject to particular regulation -- also in conjunction with the EU AI Act (high-risk category).

Relationship to Art. 22 GDPR

Automated decisions involving sensitive data:

  • are particularly high-risk
  • may require additional protective measures
  • frequently trigger a Data Protection Impact Assessment

Practical Implementation

Step 1 -- Data Classification

  • Do training data contain sensitive categories?
  • Can models infer such data?
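Step 1 can be supported by a simple automated scan. The sketch below assumes hypothetical column names and keyword hints; a real classification of course still requires legal assessment:

```python
# Hypothetical keyword map -- extend per project; not a legal standard
SENSITIVE_HINTS = {
    "health": ["diagnosis", "icd", "medication", "lab_value"],
    "biometric": ["face_embedding", "fingerprint", "voiceprint"],
    "beliefs": ["religion", "political_party", "union_member"],
}

def flag_columns(columns):
    """Return columns whose names hint at an Art. 9 category."""
    flagged = {}
    for col in columns:
        for category, hints in SENSITIVE_HINTS.items():
            if any(h in col.lower() for h in hints):
                flagged[col] = category
    return flagged

print(flag_columns(["age", "postcode", "icd10_code", "union_membership"]))
# {'icd10_code': 'health', 'union_membership': 'beliefs'}
```

Such a scan only catches explicitly named fields; the inference risk from seemingly neutral data must be assessed separately.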

Step 2 -- Assess Exceptions

  • Does Art. 9(2) apply?
  • Is explicit consent available?
  • Is there a legal basis?

Step 3 -- Implement Protective Measures

  • Access restrictions
  • Encryption
  • Minimisation
  • Bias testing
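Bias testing can start with a simple fairness metric. The following is a sketch of a demographic-parity check on hypothetical audit data; group labels and predictions are illustrative:

```python
def demographic_parity_gap(predictions, groups):
    """Difference between the highest and lowest positive-prediction
    rate across groups; 0 means equal rates, larger values mean more
    disparity."""
    counts = {}
    for pred, grp in zip(predictions, groups):
        n_pos, n = counts.get(grp, (0, 0))
        counts[grp] = (n_pos + pred, n + 1)
    rates = {g: p / n for g, (p, n) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical audit: the model approves group "X" far more often than "Y"
preds  = [1, 1, 1, 0, 0, 0, 0, 1]
groups = ["X", "X", "X", "X", "Y", "Y", "Y", "Y"]
print(demographic_parity_gap(preds, groups))  # 0.75 - 0.25 = 0.5
```

A large gap does not prove unlawful discrimination, but it is a strong trigger for deeper review and documentation.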

Step 4 -- Documentation

  • Document the exception basis
  • Conduct risk analysis
  • Prepare a DPIA where applicable

Connection to the EU AI Act

Several prohibited practices under Art. 5 AI Act concern:

  • Biometric categorisation
  • Sensitive profiling

High-risk AI systems in healthcare are also subject to parallel obligations.

Common Misconceptions

  Assumption                                 Reality
  "We do not store sensitive data"           Models can infer them
  "Pseudonymisation is sufficient"           GDPR remains applicable
  "Only the healthcare sector is affected"   HR or marketing AI can also trigger Art. 9

Governance Recommendation

For AI projects, the following should be assessed as standard:

  • Can sensitive categories be affected directly or indirectly?
  • Are bias tests implemented?
  • Do discrimination risks exist?

A structured risk approach significantly reduces liability risks.

Need help implementing?

Work with Creativate AI Studio to design, validate and implement AI systems — technically sound, compliant and production-ready.

Need legal clarity?

For specific legal questions on the AI Act and GDPR, specialized legal advice focusing on AI regulation, data protection and compliance structures is available.

Independent legal advice. No automated legal information. The platform ai-playbook.eu does not provide legal advice.

Next Steps

  1. Classify your training and inference data.
  2. Assess possible indirect inferences of sensitive characteristics.
  3. Evaluate whether an exception under Art. 9(2) applies.
  4. Implement technical protective measures.
  5. Conduct a DPIA where there is increased risk.

