Overview
AI systems are increasingly deployed in human resources: automated CV screening, matching algorithms, video interview analysis, performance assessments and internal talent scoring. At the same time, HR is one of the most sensitive areas from a regulatory perspective, because decisions have an immediate impact on professional opportunities, income and career paths.
The EU AI Act explicitly classifies certain HR AI systems as high-risk, particularly those used for recruitment, selection or performance assessment. In addition, GDPR requirements apply (particularly Art. 22 GDPR), along with employment law provisions and anti-discrimination rules (e.g. the German General Equal Treatment Act, AGG).
This article explains:
- Typical AI use cases in HR
- AI Act high-risk classification
- Discrimination and bias risks
- Art. 22 GDPR for automated decisions
- Role of the works council
- Practical implementation steps
1. Typical AI Use Cases in HR
Recruiting and Pre-Selection
- CV screening
- Profile matching
- Ranking of applicants
Video Interview Analysis
- Speech analysis
- Facial expression or voice parameters
- Behavioural pattern recognition
Performance Assessment
- KPI-based employee scoring
- Productivity analyses
- Potential forecasts
Internal Talent or Succession Planning
- Aptitude scoring
- Career path suggestions
High Intensity of Interference
HR decisions directly affect fundamental rights such as freedom of occupation, equal treatment and personality rights.
2. EU AI Act: High-Risk Classification in HR
The EU AI Act classifies AI systems as high-risk when they are used for:
- Selection or recruitment of individuals
- Assessment of applications
- Decisions on promotion or termination
- Performance or behavioural assessment
Example:
A system that automatically assesses or pre-sorts applicants typically falls under the high-risk category.
Supporting Systems
Even systems that formally "only support" can have high-risk character if they are de facto decision-shaping.
3. Discrimination Risks and Anti-Discrimination Law
HR AI carries significant risks of indirect discrimination.
Possible bias sources:
- Historical data (e.g. past personnel decisions)
- Proxy variables (postcode, educational background, employment gaps)
- Language patterns
Legal framework:
- General Equal Treatment Act (AGG)
- Fundamental rights
- Art. 9 GDPR for sensitive data
Proxy Risk
Even when sensitive characteristics are not explicitly captured, indirect correlations can have a discriminatory effect.
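The proxy risk above can be made concrete with a simple check: even if a protected attribute is never a model input, a "neutral" feature can encode it. The following sketch uses fabricated toy data and illustrative group labels; it is not a legally sufficient bias audit, only a first indicator.

```python
# Illustrative sketch (fabricated data): checking whether a "neutral"
# feature such as postcode area acts as a proxy for a protected attribute.

# Toy applicant records: (postcode_area, gender) -- hypothetical values.
applicants = [
    ("A", "f"), ("A", "f"), ("A", "f"), ("A", "m"),
    ("B", "m"), ("B", "m"), ("B", "m"), ("B", "f"),
]

def group_share(records, area, value):
    """Share of applicants in `area` having the protected value `value`."""
    in_area = [g for a, g in records if a == area]
    return sum(1 for g in in_area if g == value) / len(in_area)

share_a = group_share(applicants, "A", "f")  # 0.75
share_b = group_share(applicants, "B", "f")  # 0.25

# A large gap means postcode statistically encodes the protected attribute:
# a model using postcode can discriminate indirectly even though gender
# is never an input feature.
proxy_gap = abs(share_a - share_b)
print(f"Female share: area A={share_a:.2f}, area B={share_b:.2f}, gap={proxy_gap:.2f}")
```

In practice such checks would run over the real feature set against each protected characteristic, with statistical tests rather than a raw gap.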
4. Art. 22 GDPR: Automated Decisions in HR
Art. 22 is particularly relevant when:
- An application is rejected solely by automated means
- A scoring system significantly influences career opportunities
Affected individuals may have a right to:
- Human review
- Expression of their point of view
- Contestation
A merely formal "human confirmation" is insufficient if the decision is de facto automated.
5. Transparency Obligations
Employers must provide information about, among other things:
- Use of AI systems
- Data categories
- Assessment logic (in understandable form)
- Possible impacts
Particularly important:
- Transparency already in the application process
- Clarity about automated processing
6. Role of the Works Council
In Germany, the introduction of AI systems in HR frequently triggers co-determination rights.
Typical constellations:
- Performance monitoring
- Behavioural analysis
- Automated assessment systems
Early involvement can prevent conflicts.
7. DPIA in the HR Context
A Data Protection Impact Assessment is frequently required for:
- Systematic assessment of personal aspects
- Large-scale profiling
- Automated decisions with significant impacts
Typical risks:
- Discrimination
- Misclassification
- Lack of transparency
- Stigmatisation
8. Practical Implementation: HR AI Checklist
A) Scope and Classification
- Is the system used for recruiting or performance assessment?
- Does it fall under Annex III (high-risk)?
- Is there Art. 22 relevance?
B) Data Protection and Transparency
- Establish legal basis
- Update information obligations
- Define human oversight process
C) Bias and Fairness
- Review training data
- Define fairness metrics
- Conduct regular bias tests
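A common first-pass fairness metric for the bias tests above is the adverse impact ratio (the "four-fifths rule"). The groups, counts and threshold below are illustrative assumptions for a sketch; a real assessment needs legally informed group definitions and statistical significance testing.

```python
# Sketch of a simple bias test: the adverse impact ratio, screened
# against the conventional four-fifths (0.8) threshold.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of applicants from a group that passed the screening."""
    return selected / total

def adverse_impact_ratio(rate_group: float, rate_reference: float) -> float:
    """Ratio of a group's selection rate to the most favoured group's rate."""
    return rate_group / rate_reference

# Hypothetical screening outcomes per group.
rate_men = selection_rate(40, 100)    # 0.40
rate_women = selection_rate(24, 100)  # 0.24

air = adverse_impact_ratio(rate_women, rate_men)  # 0.6
if air < 0.8:  # conventional four-fifths threshold
    print(f"Adverse impact ratio {air:.2f} below 0.8 -- investigate for bias")
```

The four-fifths rule originates from US employment guidance and is only a heuristic; under the AGG and the AI Act it can flag candidates for investigation, not settle the legal question.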
D) Governance
- Involve the works council
- Implement documentation and logging
- Establish incident and complaint mechanisms
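The documentation and logging point can be as simple as an append-only audit trail of every AI-assisted decision. The record fields and file name below are illustrative assumptions, sketched with the standard library:

```python
# Hedged sketch: append-only audit log for HR AI decisions, supporting
# the documentation/logging duty. All field names are illustrative.
import json
import time

def log_decision(path: str, record: dict) -> None:
    """Append one decision record as a JSON line (simple audit trail)."""
    record = {**record, "logged_at": time.time()}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

log_decision("hr_ai_audit.jsonl", {
    "applicant_id": "A-123",
    "model_version": "screening-v2",   # hypothetical identifier
    "ai_score": 0.31,
    "human_decision": "reject",
    "reviewer": "hr_user_7",
})
```

In production this would typically go to tamper-evident storage with access controls and retention rules, since the log itself contains personal data subject to the GDPR.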
Common Sources of Error
| Error | Consequence |
|---|---|
| Fully automated rejection | Art. 22 risk |
| No bias analysis | Discrimination risk |
| Unclear transparency in the application process | GDPR violation |
| No works council involvement | Employment law risk |
| Missing high-risk classification | AI Act violation |
Need help implementing?
Work with Creativate AI Studio to design, validate and implement AI systems — technically sound, compliant and production-ready.
Need legal clarity?
For specific legal questions on the AI Act and GDPR, specialized legal advice focusing on AI regulation, data protection and compliance structures is available.
Independent legal advice. No automated legal information. The platform ai-playbook.eu does not provide legal advice.
Not sure where you stand?
If your AI use case does not clearly fit into a category, send us a brief description — we will point you in the right direction.
Next Steps
- Classify your HR AI system by AI Act risk level.
- Review Art. 22 relevance and define human oversight.
- Conduct bias tests and, if necessary, a DPIA.
- Update transparency and information obligations.
- Involve qualified experts early.