Overview
Art. 25 GDPR obliges controllers to integrate data protection into systems and processes from the outset rather than retroactively. This principle is known as Privacy by Design. In addition, Privacy by Default requires that the most privacy-friendly settings are active by default.
For AI systems, this article is particularly relevant, as training data, model architecture and decision logic are already determined during the development phase.
This article explains:
- The requirements of Art. 25 GDPR
- The seven foundational principles of Privacy by Design
- Technical implementation approaches for AI systems
- The difference between Design and Default
- A practical checklist for AI developers
Art. 25 GDPR -- Legal Framework
Controllers must:
- take appropriate technical and organisational measures
- take into account the state of the art
- consider implementation costs and risks
- effectively implement data protection principles
Early Integration
Privacy by Design is not merely a documentation obligation -- it is an architectural decision.
The Seven Principles according to Cavoukian
- Proactive not reactive
- Privacy as the default setting
- Privacy embedded into design
- Full functionality (no "either-or")
- End-to-end security
- Transparency
- User-centricity
These principles are not enshrined word-for-word in the law, but serve as a recognised reference framework.
Privacy by Design in the AI Context
Data Collection
- Minimisation of collected data
- Clear purpose definition
- Avoidance of unnecessarily sensitive data categories
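The minimisation principle above can be sketched as an allowlist filter applied at the point of collection: only fields declared necessary for a stated purpose pass through. The purpose and field names here are hypothetical.

```python
# Purpose-bound data minimisation sketch (hypothetical purpose and field names).
ALLOWED_FIELDS = {
    "churn_prediction": {"account_age_days", "plan_tier", "monthly_usage"},
}

def minimise(record: dict, purpose: str) -> dict:
    """Keep only the fields declared necessary for the stated purpose."""
    allowed = ALLOWED_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

raw = {"account_age_days": 412, "plan_tier": "pro",
       "monthly_usage": 87.5, "email": "user@example.com"}
# The email address is dropped because it is not needed for this purpose.
print(minimise(raw, "churn_prediction"))
```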
Training Data
- Representativeness assessment
- Bias analysis
- Documented selection criteria
Model Architecture
Technical measures may include:
- Federated Learning
- Differential Privacy
- On-device processing
- Encryption during training and inference
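Of the measures listed, differential privacy is the easiest to illustrate in a few lines: a count query is released with calibrated Laplace noise, so no individual record can be inferred from the output. This is a minimal sketch; the dataset and the epsilon value are chosen for illustration only.

```python
import random

def dp_count(values, predicate, epsilon: float = 1.0) -> float:
    """Differentially private count of matching records.

    A count query has sensitivity 1 (one person changes the count by
    at most 1), so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy. The difference of two exponential
    draws with rate epsilon is Laplace-distributed with scale 1/epsilon.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

ages = [23, 35, 41, 29, 52, 61, 37]  # illustrative data
print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))
```

Smaller epsilon values add more noise and thus stronger privacy at the cost of accuracy.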
Access Controls
- Role-based access systems
- Separation of training and production data
- Access logging
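The three controls above can be combined in a single sketch: a role-to-permission mapping checked on every request, with each attempt logged. The roles and permission names are hypothetical.

```python
import logging

logging.basicConfig(level=logging.INFO)

# Hypothetical role-to-permission mapping for an ML platform.
ROLE_PERMISSIONS = {
    "data_engineer": {"read_training_data", "write_training_data"},
    "ml_engineer": {"read_training_data", "deploy_model"},
    "auditor": {"read_access_logs"},
}

def check_access(user: str, role: str, permission: str) -> bool:
    """Grant only permissions assigned to the user's role; log every attempt."""
    granted = permission in ROLE_PERMISSIONS.get(role, set())
    logging.info("access %s: user=%s role=%s permission=%s",
                 "granted" if granted else "denied", user, role, permission)
    return granted

check_access("alice", "auditor", "deploy_model")   # denied, but logged
check_access("bob", "ml_engineer", "deploy_model") # granted and logged
```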
Transparency Mechanisms
- Model cards
- Data provenance documentation
- Versioning
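All three mechanisms can live in one structured artefact shipped alongside the model. The sketch below combines a minimal model card with provenance and version fields; the field set follows common model-card practice, and the concrete values are hypothetical.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    """Minimal model card: intended use, data provenance and version."""
    model_name: str
    version: str
    intended_use: str
    training_data_sources: list
    known_limitations: list = field(default_factory=list)

card = ModelCard(
    model_name="churn-classifier",
    version="1.2.0",
    intended_use="Internal churn-risk scoring; not for automated decisions.",
    training_data_sources=["crm_export_2024q1 (pseudonymised)"],
    known_limitations=["Underrepresents customers acquired after 2024-03"],
)
# Serialise the card so it can be stored and versioned with the model artefact.
print(json.dumps(asdict(card), indent=2))
```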
Privacy by Default
By default, only data necessary for the respective purpose may be processed.
Examples:
- Tracking features deactivated by default
- Profiling only after activation
- Minimal data storage as the default setting
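In practice, the examples above amount to choosing conservative defaults in configuration: every optional processing step is opt-in. A minimal sketch with hypothetical setting names:

```python
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    """Privacy by Default: every optional processing step is opt-in."""
    analytics_tracking: bool = False  # tracking deactivated by default
    profiling_enabled: bool = False   # profiling only after explicit activation
    retention_days: int = 30          # minimal storage as the default setting

# A new user receives the most privacy-friendly configuration
# without taking any action.
settings = PrivacySettings()
print(settings)
```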
Avoid Misconfiguration
Data protection must not depend on an active user decision: protective settings must be in place before the user changes anything.
Technical Measures for AI Systems
| Measure | Objective |
|---|---|
| Pseudonymisation | Reduction of identifiability |
| Encryption | Protection against unauthorised access |
| Differential Privacy | Protection of individual data traces |
| Federated Learning | Data remain local |
| Access Controls | Limitation of internal use |
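Of the measures in the table, pseudonymisation can be sketched as keyed hashing: direct identifiers are replaced by HMAC values, so re-identification requires the separately stored key. The key and identifier below are illustrative only.

```python
import hmac
import hashlib

def pseudonymise(identifier: str, key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Without the key, the pseudonym cannot be linked back to the
    identifier; the key must therefore be stored separately under
    strict access controls.
    """
    return hmac.new(key, identifier.encode(), hashlib.sha256).hexdigest()

key = b"store-this-key-separately"  # illustrative; use a managed secret in practice
p1 = pseudonymise("user@example.com", key)
p2 = pseudonymise("user@example.com", key)
assert p1 == p2  # deterministic: the same identifier yields the same pseudonym
print(p1[:16])
```

Because the mapping is deterministic per key, records about the same person remain linkable for analysis while the identifier itself is removed.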
Connection to the EU AI Act
Privacy by Design complements:
- Risk management obligations
- Data governance requirements
- Transparency obligations
Both legal frameworks require a structured development process.
Practical Checklist for AI Developers
Conception Phase
- Document purpose definition
- Define data categories
- Identify sensitive data
Development Phase
- Implement minimisation strategy
- Conduct bias tests
- Integrate security measures
Implementation Phase
- Review default settings
- Create transparency information
- Set up access controls
Operational Phase
- Regular review
- Establish incident management
- Document versioning
Common Errors
| Error | Risk |
|---|---|
| Data protection considered only after development | System modifications necessary |
| Unclear purpose definition | Purpose limitation violation |
| Missing documentation | Violation of accountability |
| Comprehensive data collection by default | Privacy by Default violation |
Governance Recommendation
Privacy by Design should:
- be part of the product development process
- be integrated into architectural decisions
- be audited regularly
Interdisciplinary collaboration (legal, technical, compliance) is essential.
Need help implementing?
Work with Creativate AI Studio to design, validate and implement AI systems — technically sound, compliant and production-ready.
Something novel or high-risk?
Collaborate with Creativate AI Studio and a network of industry experts and deep-tech researchers to explore, prototype and validate advanced AI systems.
Next Steps
- Integrate data protection requirements into your development guidelines.
- Implement privacy-friendly default settings.
- Document technical protective measures in a comprehensible manner.
- Conduct regular data protection reviews.
- Assess whether the system is high-risk or requires a data protection impact assessment (DPIA).
Need legal clarity?
For specific legal questions on the AI Act and GDPR, specialized legal advice focusing on AI regulation, data protection and compliance structures is available.
Independent legal advice. No automated legal information. The platform ai-playbook.eu does not provide legal advice.