Overview
Developers of AI systems are not the only ones subject to regulatory requirements. Companies, public authorities and organisations that deploy an AI system are classified as deployers under the EU AI Act -- and bear obligations of their own.
This role is particularly relevant for companies that use AI systems from third-party providers (e.g. recruiting tools, credit scoring software or AI chatbots).
This article explains:
- Who qualifies as a deployer
- What obligations Art. 26--27 impose
- When deployers can become providers
- How deployer obligations differ from provider obligations
- What practical steps are necessary now
Who Is a "Deployer"?
A deployer is any natural or legal person who:
- Uses an AI system within their own area of responsibility
- Integrates it into business processes
- Makes decisions based on AI outputs
Examples:
- A company uses an AI recruiting system
- A bank deploys a credit scoring tool
- A university uses AI for exam grading
Important
The deployer is not necessarily the developer of the system. Using a third-party AI system also triggers deployer obligations.
Core Deployer Obligations (Art. 26)
Deployers of high-risk AI systems must in particular:
- Use the system in accordance with its intended purpose
- Ensure human oversight
- Guarantee input data quality
- Monitor system performance
- Report incidents
- Fulfil transparency obligations towards affected persons
Intended Use
The AI system may only be deployed:
- Within the intended scope of application
- In accordance with the provider's documentation
- In compliance with the provided instructions
Deployment outside the intended purpose can:
- Lead to legal violations
- Increase liability risks
- Shift the deployer role to a provider role
Role Shift
If an AI system is substantially modified or used for unintended purposes, the deployer can legally become a provider.
Human Oversight
Deployers must ensure that:
- Qualified personnel oversee the system
- Intervention is possible
- Decisions can be reviewed
- Malfunctions are detected
In practice, this means:
- Training employees
- Clear responsibilities
- Defining intervention processes
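Intervention processes are often easiest to reason about as an explicit routing rule: outputs that fall below a confidence threshold go to a human reviewer instead of being acted on automatically. The sketch below illustrates this pattern; the field names, the 0.7 threshold and the routing labels are illustrative assumptions, not requirements of the AI Act -- appropriate oversight design depends on the specific system and the provider's instructions.

```python
from dataclasses import dataclass

@dataclass
class AIDecision:
    subject_id: str
    score: float        # model confidence, assumed to be in [0, 1]
    explanation: str

def requires_human_review(decision: AIDecision, threshold: float = 0.7) -> bool:
    """Route low-confidence outputs to a human reviewer.

    The threshold is an illustrative assumption; real oversight rules
    should follow the provider's documentation and internal policy.
    """
    return decision.score < threshold

def process(decision: AIDecision) -> str:
    # Every branch leaves an auditable outcome, so decisions can be reviewed.
    if requires_human_review(decision):
        return "queued_for_human_review"
    return "auto_approved_with_audit_log"
```

The key design point is that the human-review path is a first-class outcome, not an exception handler -- reviewers can intervene before the output takes effect.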
Input Data Quality
Deployers bear responsibility for:
- Relevance of the input data
- Avoiding obvious biases
- Data currency
Example: If a credit scoring system is fed with erroneous customer data, the deployer bears the responsibility -- not the provider.
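In practice, input-data responsibility often translates into validation checks run before records reach the AI system. The following is a minimal sketch; the field names and plausibility rules are hypothetical -- real checks must follow the input specification in the provider's documentation.

```python
def validate_input_record(record: dict) -> list[str]:
    """Basic completeness and plausibility checks before a record is fed
    to a scoring system. Fields and rules are illustrative assumptions.
    """
    issues: list[str] = []
    required = ("customer_id", "income", "record_updated")
    for field in required:
        if field not in record or record[field] in (None, ""):
            issues.append(f"missing: {field}")
    # Plausibility check: reject obviously erroneous values early.
    income = record.get("income")
    if isinstance(income, (int, float)) and income < 0:
        issues.append("implausible: negative income")
    return issues
```

A record that fails validation should be corrected or excluded rather than scored -- erroneous inputs remain the deployer's responsibility even when the model behaves correctly.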
Monitoring & Performance Oversight
Deployers must:
- Monitor system outputs
- Document anomalies
- Detect performance deviations
- Make adjustments when necessary
This is particularly relevant for:
- HR systems
- Financial decisions
- Educational systems
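Detecting performance deviations can start with something as simple as comparing recent outputs against a documented baseline. The sketch below is deliberately minimal and assumes numeric outputs and a fixed tolerance; production monitoring would use proper statistical drift tests and persistent logging.

```python
import statistics

def detect_drift(baseline: list[float], recent: list[float],
                 tolerance: float = 0.1) -> bool:
    """Flag a performance deviation when the mean of recent system
    outputs diverges from the documented baseline by more than a
    tolerance. Both the metric (mean shift) and the tolerance are
    illustrative assumptions, not a prescribed method.
    """
    return abs(statistics.mean(recent) - statistics.mean(baseline)) > tolerance
```

When a deviation is flagged, the anomaly should be documented and investigated -- that record is exactly the kind of evidence the monitoring obligation calls for.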
Reporting Obligations
Serious incidents or malfunctions must be:
- Reported to the provider
- Where applicable, reported to the competent market surveillance authorities
Examples:
- Systematic discrimination
- Security vulnerabilities
- Serious erroneous decisions
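A structured internal record makes it easier to satisfy both reporting paths (provider and, where applicable, market surveillance authority). The schema below is an illustrative assumption -- the Act prescribes the reporting duty, not this data structure.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class IncidentReport:
    """Minimal internal record for a reportable incident.

    Fields are hypothetical; organisations should align their template
    with the provider's reporting channel and authority requirements.
    """
    system_name: str
    description: str
    severity: str                      # e.g. "serious"
    detected_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
    reported_to_provider: bool = False
    reported_to_authority: bool = False

report = IncidentReport(
    system_name="credit-scoring-v2",
    description="systematic score deviation for one applicant group",
    severity="serious",
)
```

Keeping both `reported_to_*` flags in the same record helps verify that neither reporting path was skipped.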
Fundamental Rights Impact Assessment (FRIA)
In certain cases, deployers must conduct a fundamental rights impact assessment.
This is particularly relevant for:
- Public authorities
- Large-scale deployment with significant fundamental rights implications
The FRIA assesses:
- Potential fundamental rights impacts
- Discrimination risks
- Transparency deficits
- Control mechanisms
Distinction from the GDPR
The FRIA is not identical to the Data Protection Impact Assessment (DPIA) under Art. 35 GDPR -- but may overlap in content.
Transparency towards Affected Persons
Depending on the area of deployment, affected persons must be informed about:
- Use of an AI system
- Type of decision support
- Possibilities for human review
This may coincide with GDPR information obligations.
When Does a Deployer Become a Provider?
A deployer can become a provider if they:
- Substantially modify the system
- Integrate their own AI components
- Distribute the system under their own brand
- Significantly change the intended purpose
In this case, all provider obligations additionally apply.
Provider vs. Deployer Distinction
| Provider | Deployer |
|---|---|
| Develops or places AI system on the market | Uses the AI system |
| Conducts conformity assessment | Deploys system in accordance with intended purpose |
| Creates technical documentation | Monitors input data & usage |
| Bears CE responsibility | Bears deployment responsibility |
Many companies hold both roles.
Connection to the GDPR
Deployers must additionally review:
- Art. 22 GDPR (automated decisions)
- Transparency obligations (Art. 13/14 GDPR)
- Data Protection Impact Assessment (Art. 35 GDPR)
- Third-country transfers for cloud AI
An AI Act assessment in isolation is therefore insufficient.
Practical Implementation
Step 1 -- Role Clarification
- Are we exclusively deployers?
- Have we modified the system?
- Are we redistributing it?
Step 2 -- Deployment Analysis
- Does the deployment match the intended purpose?
- Have training sessions been conducted?
- Are monitoring processes defined?
Step 3 -- Documentation
- Internal guidelines for AI use
- Training records
- Monitoring protocols
- Incident reporting process
Step 4 -- Data Protection Interface
- Is a DPIA required?
- Are affected persons informed?
- Is transparency sufficient?
Need help implementing?
Work with Creativate AI Studio to design, validate and implement AI systems — technically sound, compliant and production-ready.
Need legal clarity?
For specific legal questions on the AI Act and GDPR, specialized legal advice focusing on AI regulation, data protection and compliance structures is available.
Independent legal advice. No automated legal information. The platform ai-playbook.eu does not provide legal advice.
Next Steps
- Identify your role in the AI ecosystem.
- Check whether your AI system is high-risk.
- Implement internal monitoring processes.
- Train responsible employees.
- Document all deployment and control measures.