Deployer Obligations under Art. 26–27 EU AI Act

What obligations apply to deployers of AI systems under the EU AI Act? Overview of intended use, human oversight, data quality, monitoring and fundamental rights impact assessment.

11 February 2026 · 5 min read
EU AI Act · Deployer · High-Risk AI · Fundamental Rights Impact Assessment · Compliance

Overview

Regulatory requirements do not fall on developers of AI systems alone. Companies, authorities and organisations that operate an AI system are classified as deployers under the EU AI Act -- and bear obligations of their own.

This role is particularly relevant for companies that use AI systems from third-party providers (e.g. recruiting tools, credit scoring software or AI chatbots).

This article explains:

  • Who qualifies as a deployer
  • What obligations Art. 26--27 impose
  • When deployers can become providers
  • How deployer obligations differ from provider obligations
  • What practical steps are necessary now

Who Is a "Deployer"?

A deployer is any natural or legal person, public authority, agency or other body that:

  • Uses an AI system within their own area of responsibility
  • Integrates it into business processes
  • Makes decisions based on AI outputs

Examples:

  • A company uses an AI recruiting system
  • A bank deploys a credit scoring tool
  • A university uses AI for exam grading

Important

The deployer is not necessarily the developer of the system. Using a third-party AI system also triggers deployer obligations.

Core Deployer Obligations (Art. 26)

Deployers of high-risk AI systems must in particular:

  1. Use the system in accordance with its intended purpose
  2. Ensure human oversight
  3. Guarantee input data quality
  4. Monitor system performance
  5. Report incidents
  6. Fulfil transparency obligations towards affected persons

Intended Use

The AI system may only be deployed:

  • Within the intended scope of application
  • In accordance with the provider's documentation
  • In compliance with the provided instructions

Deployment outside the intended purpose can:

  • Lead to legal violations
  • Increase liability risks
  • Shift the deployer role to a provider role

Role Shift

If an AI system is substantially modified or used for unintended purposes, the deployer can legally become a provider.

Human Oversight

Deployers must ensure that:

  • Qualified personnel oversee the system
  • Intervention is possible
  • Decisions can be reviewed
  • Malfunctions are detected

In practice, this means:

  • Training employees
  • Clear responsibilities
  • Defining intervention processes

Input Data Quality

Deployers bear responsibility for:

  • Relevance of the input data
  • Avoiding obvious biases
  • Data currency

Example: If a credit scoring system is fed with erroneous customer data, the deployer bears the responsibility -- not the provider.

Monitoring & Performance Surveillance

Deployers must:

  • Monitor system outputs
  • Document anomalies
  • Detect performance deviations
  • Make adjustments when necessary

This is particularly relevant for:

  • HR systems
  • Financial decisions
  • Educational systems
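As a minimal illustration of what ongoing output monitoring can look like, the sketch below flags groups whose positive-outcome rate falls markedly below that of the best-performing group. The 0.8 threshold and the sample data are purely illustrative -- this is a simple statistical check, not a legal test for discrimination:

```python
from collections import Counter

def selection_rates(decisions):
    """Compute the per-group positive-outcome rate from (group, selected) pairs."""
    totals, positives = Counter(), Counter()
    for group, selected in decisions:
        totals[group] += 1
        if selected:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def flag_deviations(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (illustrative disparity check only)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return sorted(g for g, r in rates.items() if r < threshold * best)

# Hypothetical decision log from an HR screening tool:
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False), ("B", False)]
print(flag_deviations(decisions))  # groups lagging the best-performing group
```

Any group flagged by such a check would be an anomaly to document and investigate, feeding directly into the reporting obligations discussed next.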

Reporting Obligations

Serious incidents or malfunctions must be:

  • Reported to the provider
  • Where applicable, reported to the competent market surveillance authorities

Examples:

  • Systematic discrimination
  • Security vulnerabilities
  • Serious erroneous decisions

Fundamental Rights Impact Assessment (FRIA)

Under Art. 27, certain deployers must conduct a fundamental rights impact assessment before putting a high-risk AI system into use.

This applies in particular to:

  • Bodies governed by public law and private entities providing public services
  • Deployers of specific high-risk systems, such as creditworthiness assessment or risk pricing in life and health insurance

The FRIA assesses:

  • Potential fundamental rights impacts
  • Discrimination risks
  • Transparency deficits
  • Control mechanisms
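The four assessment dimensions above can be tracked in a simple record structure. The field names below are illustrative -- the AI Act prescribes the content of a FRIA, not a data format:

```python
from dataclasses import dataclass, field

@dataclass
class FriaRecord:
    """Minimal structure for documenting a FRIA (field names illustrative)."""
    system_name: str
    affected_groups: list = field(default_factory=list)
    fundamental_rights_impacts: list = field(default_factory=list)
    discrimination_risks: list = field(default_factory=list)
    transparency_measures: list = field(default_factory=list)
    control_mechanisms: list = field(default_factory=list)

    def open_items(self):
        """Return the assessment dimensions that are still undocumented."""
        return [name for name in ("affected_groups",
                                  "fundamental_rights_impacts",
                                  "discrimination_risks",
                                  "transparency_measures",
                                  "control_mechanisms")
                if not getattr(self, name)]

fria = FriaRecord(system_name="credit-scoring-v2",
                  discrimination_risks=["proxy variables for protected attributes"])
print(fria.open_items())  # dimensions still to be assessed
```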

Distinction from the GDPR

The FRIA is not identical to the Data Protection Impact Assessment (DPIA) under Art. 35 GDPR -- but may overlap in content.

Transparency towards Affected Persons

Depending on the area of deployment, affected persons must be informed about:

  • Use of an AI system
  • Type of decision support
  • Possibilities for human review

This may coincide with GDPR information obligations.

When Does a Deployer Become a Provider?

A deployer can become a provider if they:

  • Substantially modify the system
  • Integrate their own AI components
  • Distribute the system under their own brand
  • Significantly change the intended purpose

In this case, all provider obligations additionally apply.
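The four role-shift triggers listed above can be expressed as a simple decision aid. This is a sketch of the article's checklist, not a legal determination:

```python
def becomes_provider(substantially_modified: bool,
                     adds_own_ai_components: bool,
                     rebranded: bool,
                     purpose_changed: bool) -> bool:
    """Return True if any role-shift trigger from the article applies
    (simplified decision aid, not legal advice)."""
    return any([substantially_modified, adds_own_ai_components,
                rebranded, purpose_changed])

# A deployer using a vendor tool exactly as documented stays a deployer:
print(becomes_provider(False, False, False, False))  # False
# Distributing the system under one's own brand triggers provider obligations:
print(becomes_provider(False, False, True, False))   # True
```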

Provider vs. Deployer Distinction

Provider | Deployer
--- | ---
Develops or places the AI system on the market | Uses the AI system
Conducts the conformity assessment | Deploys the system in accordance with its intended purpose
Creates the technical documentation | Monitors input data & usage
Bears CE-marking responsibility | Bears deployment responsibility

Many companies hold both roles.

Connection to the GDPR

Deployers must additionally review:

  • Art. 22 GDPR (automated decisions)
  • Transparency obligations (Art. 13/14 GDPR)
  • Data Protection Impact Assessment (Art. 35 GDPR)
  • Third-country transfers for cloud AI

An isolated AI Act assessment is therefore insufficient.

Practical Implementation

Step 1 -- Role Clarification

  • Are we exclusively deployers?
  • Have we modified the system?
  • Are we redistributing it?

Step 2 -- Deployment Analysis

  • Does the deployment match the intended purpose?
  • Have training sessions been conducted?
  • Are monitoring processes defined?

Step 3 -- Documentation

  • Internal guidelines for AI use
  • Training records
  • Monitoring protocols
  • Incident reporting process

Step 4 -- Data Protection Interface

  • Is a DPIA required?
  • Are affected persons informed?
  • Is transparency sufficient?
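The four steps above can be operationalised as an internal gap check. The step labels and item names below are illustrative, not taken from the Act:

```python
def compliance_gaps(status: dict) -> dict:
    """Given a dict of checklist items to booleans, return the unmet items
    grouped by implementation step (labels illustrative)."""
    steps = {
        "role clarification": ["role_assessed", "modifications_reviewed"],
        "deployment analysis": ["intended_purpose_matched", "staff_trained",
                                "monitoring_defined"],
        "documentation": ["internal_guidelines", "training_records",
                          "monitoring_protocols", "incident_process"],
        "data protection": ["dpia_checked", "affected_persons_informed"],
    }
    return {step: [item for item in items if not status.get(item, False)]
            for step, items in steps.items()
            if any(not status.get(item, False) for item in items)}

# Hypothetical self-assessment with two open items:
status = {"role_assessed": True, "modifications_reviewed": True,
          "intended_purpose_matched": True, "staff_trained": False,
          "monitoring_defined": True, "internal_guidelines": True,
          "training_records": False, "monitoring_protocols": True,
          "incident_process": True, "dpia_checked": True,
          "affected_persons_informed": True}
print(compliance_gaps(status))
```

Items missing from the `status` dict are treated as unmet, so an empty assessment reports every item as a gap -- a deliberately conservative default.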

Need help implementing?

Work with Creativate AI Studio to design, validate and implement AI systems — technically sound, compliant and production-ready.

Need legal clarity?

For specific legal questions on the AI Act and GDPR, specialized legal advice focusing on AI regulation, data protection and compliance structures is available.

Independent legal advice. No automated legal information. The platform ai-playbook.eu does not provide legal advice.

Next Steps

  1. Identify your role in the AI ecosystem.
  2. Check whether your AI system is high-risk.
  3. Implement internal monitoring processes.
  4. Train responsible employees.
  5. Document all deployment and control measures.

