Transparency Obligations under the EU AI Act

The transparency requirements of the EU AI Act for AI systems – labelling obligations, information rights and technical implementation.

15 July 2025 · 3 min read
EU AI Act · Transparency · Labelling · Compliance

Introduction

Transparency is one of the cornerstones of the EU AI Act. The regulation is intended to ensure that people know when they are interacting with AI and can understand how AI systems arrive at their outputs.

Who Is Affected?

The transparency obligations apply to various actors:

  • Providers: Those who develop AI systems or place them on the market
  • Deployers: Those who deploy AI systems in their own operations
  • Importers and distributors: Those who import or distribute AI systems within the EU

Transparency Obligations by Risk Class

All AI Systems with Interaction

Basic transparency obligations apply to all AI systems that interact with natural persons:

  1. Disclose AI interaction: Users must be informed that they are interacting with an AI system
  2. Deepfake labelling: AI-generated or manipulated image, audio or video content must be labelled
  3. AI-generated text: Text published to inform the public on matters of public interest must be disclosed as AI-generated

Implementation Example

A simple notice such as "This text was created with AI assistance" or an AI badge next to the chat window may already suffice. What matters is that the notice is visible and comprehensible.
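
The notice described above can be sketched in code. The function name and wording below are illustrative assumptions, not terms prescribed by the Act:

```python
# Minimal sketch: prepend a visible AI disclosure to every chatbot reply.
# The notice text and the wrapper function are illustrative only.

AI_NOTICE = "This text was created with AI assistance."

def with_disclosure(reply: str, notice: str = AI_NOTICE) -> str:
    """Return the model reply with a clearly separated AI notice."""
    return f"{notice}\n---\n{reply}"

print(with_disclosure("Your order ships tomorrow."))
```

Whether a prepended line, a persistent banner, or a badge is most appropriate depends on the UI; the legal point is only that the disclosure is clear at the time of interaction.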

High-Risk AI Systems

Additional transparency requirements:

  • Technical documentation describing how the system works
  • Instructions for use with information on performance and limitations
  • Logging of all relevant operations
  • Information about training data and validation methods

Building technical documentation?

From logging systems to instructions for use — Creativate AI Studio supports you with the technical implementation of all transparency requirements for high-risk AI.

Technical Implementation

Labelling in Practice

Recommended labelling methods:
├── Chatbots / Assistants
│   ├── Banner: "AI-powered assistant"
│   ├── Disclaimer before interaction
│   └── Persistent notice in the UI
├── Generated Content
│   ├── Watermarks (C2PA / Content Credentials)
│   ├── Metadata tags in files
│   └── Visible labelling
└── Decision Systems
    ├── Explanation of decision logic
    ├── Display confidence scores
    └── Show alternative options

Logging Requirements

For high-risk systems, the following data must be logged:

Data Point               Retention
------------------------ --------------------
Time of use              Min. 6 months
Input data (reference)   According to purpose
Output data              Min. 6 months
System version           Permanent
Error events             Min. 6 months
Human interventions      Min. 6 months
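
The table above can be mirrored in a log record with a retention check. The class and field names are assumptions for illustration; actual retention periods must follow your legal assessment:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

SIX_MONTHS = timedelta(days=183)  # "min. 6 months", approximated in days

@dataclass
class AiLogRecord:
    timestamp: datetime        # time of use
    input_ref: str             # reference to input data, not the data itself
    output_summary: str        # or a reference to stored output
    system_version: str        # kept permanently, separate from the record
    error: Optional[str] = None
    human_intervention: bool = False

def is_expired(record: AiLogRecord, now: datetime,
               retention: timedelta = SIX_MONTHS) -> bool:
    """True once the minimum retention window has passed and the
    record may be considered for deletion."""
    return now - record.timestamp > retention
```

Note that "min. 6 months" is a floor, not a ceiling: records may need to be kept longer where other obligations (e.g. contractual or sectoral rules) apply.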

Data Protection Considerations

The logging obligations of the AI Act must be reconciled with the minimisation requirements of the GDPR. Only log what is necessary and protect personal data.
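
One common way to reconcile the two regimes is to log only a reference to the input rather than the input itself. The sketch below uses a salted SHA-256 hash as that reference; the salt handling shown is a placeholder and should follow your data protection officer's guidance:

```python
import hashlib

def input_reference(raw_input: str, salt: str = "per-deployment-secret") -> str:
    """Return a salted hash of the user input, so log entries stay
    linkable to a request without storing personal data verbatim.
    Sketch only: real deployments need proper secret management."""
    return hashlib.sha256((salt + raw_input).encode("utf-8")).hexdigest()
```

The raw input can then be held in a separate, access-controlled store with its own deletion schedule, while the audit log retains only the hash.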

Transparency Checklist

  • AI systems inventoried and classified
  • User notices implemented for all interactive AI systems
  • Deepfake labelling set up
  • Technical documentation created for high-risk AI
  • Logging system implemented
  • Explainable AI methods evaluated
  • Employee training conducted

Outlook

Transparency obligations will be progressively tightened. Companies should start implementation now to be compliant by August 2026.

Need legal clarity?

For specific legal questions on the AI Act and GDPR, specialized legal advice focusing on AI regulation, data protection and compliance structures is available.

Independent legal advice. No automated legal information. The platform ai-playbook.eu does not provide legal advice.

Not sure where you stand?

If your AI use case does not clearly fit into a category, send us a brief description — we will point you in the right direction.
