Art. 35 GDPR – Data Protection Impact Assessment (DPIA) for AI Systems

When is a Data Protection Impact Assessment required under Art. 35 GDPR for AI systems? Structure, process, AI-specific risks and connection to the EU AI Act.

11 February 2026 · 4 min read

GDPR · Art. 35 · DPIA · Data Protection Impact Assessment · Profiling · AI Compliance

Overview

The Data Protection Impact Assessment (DPIA) is one of the central risk instruments of the GDPR. It becomes necessary when processing is likely to result in a high risk to the rights and freedoms of natural persons.

In the context of AI systems, the DPIA is particularly relevant. Profiling, automated decisions, large-scale data processing or use of sensitive data frequently trigger a mandatory assessment.

This article explains:

  • When a DPIA is mandatory
  • The structured 4-phase process
  • AI-specific risks
  • Connection to the EU AI Act
  • A practical example for an AI chatbot

When Is a DPIA Required?

Under Art. 35 GDPR, a DPIA is required when:

  • new technologies are deployed
  • a systematic and extensive evaluation of personal aspects takes place
  • automated decisions with significant effects are made
  • sensitive data are processed on a large scale
  • systematic monitoring takes place

Many supervisory authorities have published positive lists that specify typical DPIA cases.
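The trigger criteria above lend themselves to a simple screening checklist. The following is an illustrative sketch of such a check; the criterion keys and the "any criterion triggers" rule are our own simplification, not a prescribed test (supervisory-authority positive lists should always be consulted in addition).

```python
# Hypothetical DPIA screening helper: flags whether any Art. 35 trigger
# criterion applies. Criterion names and wording are illustrative.

ART_35_CRITERIA = {
    "new_technology": "New technologies are deployed",
    "systematic_evaluation": "Systematic and extensive evaluation of personal aspects",
    "automated_decisions": "Automated decisions with significant effects",
    "sensitive_data_large_scale": "Sensitive data processed on a large scale",
    "systematic_monitoring": "Systematic monitoring",
}

def dpia_screening(answers: dict) -> tuple:
    """Return (DPIA likely required, list of triggered criteria)."""
    triggered = [ART_35_CRITERIA[k] for k, v in answers.items() if v]
    return (len(triggered) > 0, triggered)

required, reasons = dpia_screening({
    "new_technology": True,        # e.g. deployment of an LLM-based system
    "systematic_evaluation": False,
    "automated_decisions": True,
    "sensitive_data_large_scale": False,
    "systematic_monitoring": False,
})
print(required)   # True
```

A single triggered criterion is enough to make the project a DPIA candidate; the screening result should then feed into the full assessment described below.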

AI = Increased Assessment Obligation

The use of AI is regularly considered deployment of a "new technology" within the meaning of Art. 35(1) GDPR. AI projects should therefore always be screened for a DPIA requirement.

Typical AI Cases Requiring a DPIA

| Scenario | DPIA Likely? |
|---|---|
| Credit scoring | Yes |
| Recruiting AI | Yes |
| Medical diagnostic AI | Yes |
| Internal productivity analysis | Case-by-case assessment |
| Marketing chatbot | Depends on scope |

The 4-Phase DPIA Process

Art. 35(7) GDPR describes four core elements:

Phase 1: Systematic Description of the Processing

  • Purpose
  • Process
  • Data categories
  • Affected groups of persons
  • Recipients
  • Storage period

Additionally in the AI context:

  • Model type
  • Training data
  • Inference process
  • Degree of automation
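The description elements listed above can be captured in a structured record so that completeness is checkable. The following dataclass is an illustrative convention of our own; field names are not prescribed by Art. 35(7)(a) GDPR.

```python
from dataclasses import dataclass, field

# Illustrative record for the systematic description of processing,
# extended with AI-specific fields. Naming is our own convention.

@dataclass
class ProcessingDescription:
    purpose: str
    process: str
    data_categories: list
    data_subjects: list
    recipients: list
    storage_period: str
    # AI-specific extensions
    model_type: str = ""
    training_data: str = ""
    inference_process: str = ""
    degree_of_automation: str = ""

    def missing_fields(self) -> list:
        """List fields still left empty, for a completeness check."""
        return [name for name, value in vars(self).items() if not value]

desc = ProcessingDescription(
    purpose="Answering customer enquiries",
    process="Chatbot dialogue with LLM backend",
    data_categories=["contact data", "enquiry content"],
    data_subjects=["customers"],
    recipients=["internal support team"],
    storage_period="90 days",
)
print(desc.missing_fields())  # the four AI-specific fields are still empty
```

Keeping the AI-specific fields (model type, training data, inference process, degree of automation) in the same record as the classic GDPR elements makes gaps in the documentation immediately visible.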

Phase 2: Assessment of Necessity and Proportionality

The following must be assessed:

  • Is the processing necessary?
  • Are there less intrusive means?
  • Is the purpose legitimate?

At this stage, the purpose limitation principle of Art. 5 GDPR is applied in concrete terms to the specific processing.

Phase 3: Risk Assessment

The following are assessed:

  • Likelihood of occurrence
  • Severity of potential harm
  • Impact on fundamental rights

AI-specific risks:

  • Bias and discrimination
  • Erroneous decisions
  • Lack of transparency
  • Loss of control
  • Model manipulation
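Likelihood and severity are commonly combined in a risk matrix. The sketch below uses a 1–4 scale and multiplicative scoring; the scales, thresholds, and example ratings are illustrative assumptions, not regulatory values.

```python
# Minimal risk-matrix sketch: risk level as likelihood x severity,
# each rated 1-4. Thresholds below are illustrative, not prescribed.

def risk_level(likelihood: int, severity: int) -> str:
    score = likelihood * severity
    if score >= 12:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# Example ratings for the AI-specific risks named above (assumed values)
ai_risks = {
    "bias_and_discrimination": (3, 4),
    "erroneous_decisions": (3, 3),
    "lack_of_transparency": (4, 2),
    "loss_of_control": (2, 3),
    "model_manipulation": (2, 4),
}

for name, (likelihood, severity) in ai_risks.items():
    print(f"{name}: {risk_level(likelihood, severity)}")
```

Any risk landing in the "high" band before mitigation should drive the selection of remedial measures in the next phase, and a high *residual* risk after mitigation triggers the Art. 36 consultation discussed below.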

Bias Risks

Discrimination potential is a central assessment criterion for AI systems.

Phase 4: Remedial Measures

Measures may include:

  • Data minimisation
  • Pseudonymisation
  • Human review
  • Transparency measures
  • Logging
  • Access restrictions

The objective is to reduce the risk to an acceptable level.
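Pseudonymisation, one of the measures listed above, can be sketched as replacing a direct identifier with a keyed hash. This is a minimal illustration, assuming a separately stored secret key; the key handling and scheme shown here are placeholders, not a production design.

```python
import hashlib
import hmac

# Pseudonymisation sketch: replace a direct identifier with a keyed
# HMAC-SHA256 hash. The key must be stored separately from the
# pseudonymised data; the value below is a placeholder only.

SECRET_KEY = b"store-this-key-separately"  # placeholder, not a real key

def pseudonymise(identifier: str) -> str:
    """Deterministically map an identifier to a 64-character hex token."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

record = {"customer_id": "C-10234", "enquiry": "Question about invoice"}
record["customer_id"] = pseudonymise(record["customer_id"])
```

Because the mapping is deterministic, records belonging to the same person remain linkable for analysis, while re-identification requires the separately held key, which is precisely what distinguishes pseudonymisation from anonymisation under the GDPR.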

Example: DPIA for an AI Chatbot

System Description

  • Chatbot for answering customer enquiries
  • Processing of customer data
  • Deployment of a language model

Risks

  • Incorrect information
  • Disclosure of sensitive data
  • Unclear training data basis

Measures

  • Logging
  • Content filters
  • Human escalation capability
  • Limited data storage
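Two of the chatbot measures above, logging and human escalation, can be combined in a single message-handling gate. The sketch below is illustrative: the sensitive-data patterns are deliberately naive placeholders, and the escalation return values are our own convention.

```python
import logging
import re

# Sketch of two DPIA measures for the chatbot example: a content filter
# for sensitive data plus human escalation. Patterns are naive
# placeholders; a real deployment would use proper detectors.

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("chatbot-dpia")

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{16}\b"),                        # card-number-like digits
    re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),  # IBAN-like string
]

def handle_message(text: str) -> str:
    # Log metadata only, not message content (limited data storage)
    log.info("message received, length=%d", len(text))
    if any(p.search(text) for p in SENSITIVE_PATTERNS):
        log.warning("sensitive pattern detected, escalating to human agent")
        return "ESCALATED_TO_HUMAN"
    return "ANSWERED_BY_BOT"
```

Note that the logging itself is designed as a data-minimising measure: it records message length, not content, so the audit trail does not become a second copy of the personal data.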

Connection to the EU AI Act

The DPIA overlaps with:

  • Risk management obligations
  • Fundamental Rights Impact Assessment (FRIA)
  • Documentation requirements

While the DPIA focuses on data protection, the AI Act addresses broader fundamental rights risks.

Both assessments should be conducted in an integrated manner.

When Must the Supervisory Authority Be Consulted?

If a high residual risk remains despite the protective measures, the competent data protection supervisory authority must be consulted prior to the processing pursuant to Art. 36 GDPR.

Documentation Requirements

The DPIA must be:

  • documented in writing
  • reviewed regularly
  • updated when the system changes

Common Errors

| Error | Risk |
|---|---|
| No DPIA for high-risk AI | Fine |
| Generic risk analysis | Insufficient assessment |
| No update when model changes | Violation of accountability |
| Failure to involve Data Protection Officer | Organisational deficiency |

Practical Implementation

Step 1 -- Screening

  • Does the project fall under typical DPIA criteria?

Step 2 -- Interdisciplinary Team

  • Legal
  • IT
  • Compliance
  • Business unit

Step 3 -- Structured Documentation

  • Use a standardised template
  • Create a risk matrix

Step 4 -- Regular Updates

  • On model update
  • On change of purpose
  • On change of data source
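The update triggers listed above can be automated as a simple diff between the system state recorded at the last DPIA and the current state. The field names and state format in this sketch are illustrative assumptions.

```python
# Sketch of an update-trigger check: compare the system state recorded
# at the last DPIA with the current state and flag trigger fields that
# changed. Field names are illustrative.

REVIEW_TRIGGERS = {"model_version", "purpose", "data_sources"}

def dpia_review_needed(last_assessed: dict, current: dict) -> list:
    """Return the trigger fields that changed since the last DPIA."""
    return sorted(f for f in REVIEW_TRIGGERS
                  if last_assessed.get(f) != current.get(f))

changed = dpia_review_needed(
    {"model_version": "model-v1", "purpose": "support", "data_sources": ["crm"]},
    {"model_version": "model-v2", "purpose": "support", "data_sources": ["crm"]},
)
print(changed)  # ['model_version']
```

Running such a check as part of each release pipeline turns the "update regularly" obligation into a mechanical gate rather than a calendar reminder.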

Strategic Recommendation

The DPIA should not be understood as a mere obligation, but as:

  • A governance instrument
  • A risk reduction mechanism
  • A trust-building element

Particularly for AI projects, a structured DPIA can significantly reduce liability risks.

Need help implementing?

Work with Creativate AI Studio to design, validate and implement AI systems — technically sound, compliant and production-ready.

Need legal clarity?

For specific legal questions on the AI Act and GDPR, specialized legal advice focusing on AI regulation, data protection and compliance structures is available.

Independent legal advice. No automated legal information. The platform ai-playbook.eu does not provide legal advice.

Next Steps

  1. Assess your AI project for DPIA relevance.
  2. Use a structured DPIA template.
  3. Assess in particular discrimination and transparency risks.
  4. Integrate the DPIA into your AI governance system.
  5. Document and update regularly.
