Feb 6, 2022
Inspection-ready from day one: what reviewers actually ask for under the AI Act
The EU Artificial Intelligence Act (AI Act) is no longer theoretical. As compliance timelines begin to tighten, every organization building or deploying AI systems needs to prepare for one thing: scrutiny. Whether it’s a regulator, a notified body, or an internal auditor, your reviewers will expect to see clear evidence that you understand where your AI systems fall under the Act — and that you can prove compliance, not just claim it. This article turns the AI Act’s requirements into a practical inspection checklist you can actually use from day one.

Stella Manga Chesnay
Lawyer | GDPR & AI Act
1. Scope and risk classification: where most teams misclassify
The AI Act starts with scope and classification, and this is where most organizations stumble.
Article 6 defines which AI systems count as high-risk based on their intended purpose: systems that are safety components of products already regulated under EU harmonisation legislation (e.g., medical devices, per Annex I), and systems used in the areas listed in Annex III (e.g., recruitment, credit scoring, or critical infrastructure).
The problem? Teams often self-classify based on technical architecture rather than intended use. Reviewers will want to see:
Your documented risk assessment showing how you mapped each AI use case to the AI Act’s categories.
Clear justification if you concluded a system is not high-risk (especially if it touches human decisions).
Links between your risk classification and your internal product governance workflow.
Treat this as your first inspection checkpoint. Without solid classification, everything downstream (policies, logs, audits) is on shaky ground.
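To make that first checkpoint concrete, here is a minimal sketch of what a classification record in an internal AI inventory could look like. The field names and values are illustrative assumptions, not terms defined by the Act; the point is that each entry ties intended use, Annex III mapping, justification, and a governance link together in one place.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """Illustrative inventory entry for one AI use case (field names are assumptions)."""
    system_name: str
    intended_purpose: str          # classify by intended use, not by technical architecture
    annex_iii_area: str | None     # e.g. "employment" for recruitment tools; None if not listed
    high_risk: bool
    justification: str             # mandatory rationale, especially for "not high-risk" calls
    assessed_on: date
    governance_ticket: str         # ties the classification to your product workflow

# A recruitment screening tool, classified by what it is used for
record = AISystemRecord(
    system_name="cv-screening-assistant",
    intended_purpose="Rank job applications for human recruiters",
    annex_iii_area="employment",
    high_risk=True,
    justification="Annex III employment use case; influences hiring decisions.",
    assessed_on=date(2025, 1, 15),
    governance_ticket="GOV-142",
)
```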
2. Evidence pack: what “provable” compliance looks like
Once your risk classification is set, reviewers will expect an evidence pack that shows your system is under control. At a minimum, this includes:
Model cards documenting purpose, limitations, and performance metrics.
Policy versions showing updates over time (especially for model retraining, data sourcing, and human oversight).
Decision and override logs, which demonstrate traceability and accountability.
A RACI matrix mapping who is Responsible, Accountable, Consulted, and Informed for every compliance activity.
Together, these form the inspection backbone: the artifacts that prove compliance is not just an intention but a process.
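As one illustration of what "traceable" means for the logging item in particular, here is a minimal sketch of an append-only override log. The schema is an assumption on my part, not a format prescribed by the Act; what matters is capturing what the model recommended, what the human decided, who decided, and why.

```python
import json
from datetime import datetime, timezone

def log_override(log_path: str, system: str, decision_id: str,
                 model_output: str, human_decision: str,
                 reviewer: str, reason: str) -> None:
    """Append one human-override event as a JSON line (illustrative schema)."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "decision_id": decision_id,
        "model_output": model_output,      # what the AI recommended
        "human_decision": human_decision,  # what the overseer actually decided
        "reviewer": reviewer,              # accountability: who overrode
        "reason": reason,                  # the evidence a reviewer will ask for
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_override("overrides.jsonl", "cv-screening-assistant", "D-2051",
             model_output="reject", human_decision="shortlist",
             reviewer="s.martin", reason="Relevant experience missed by the model")
```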
3. DPIA vs. AI-specific audit: how they fit together
A common question: If we already do Data Protection Impact Assessments (DPIAs) under the GDPR, do we still need a separate AI audit?
Yes — but they should talk to each other.
The DPIA (under GDPR Articles 35–36) focuses on personal data risk. The AI-specific audit focuses on systemic risk — bias, robustness, transparency, and human oversight (as outlined in AI Act Articles 9–15). Reviewers will expect to see how your DPIA outputs feed into your AI risk management process and vice versa.
In practice, this means linking your DPIA to your AI model documentation, showing how personal data risks are mitigated alongside model-level risks.
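One lightweight way to evidence that linkage is a cross-reference index mapping each DPIA to the AI documentation it covers, plus a check for gaps. The structure and identifiers below are illustrative assumptions, not a regulatory template.

```python
# Illustrative cross-reference between GDPR DPIAs and AI Act model documentation.
dpia_ai_links = [
    {
        "dpia_id": "DPIA-2025-03",             # GDPR Art. 35 assessment
        "ai_system": "cv-screening-assistant",
        "model_card": "docs/model_cards/cv_screening_v4.md",
        "shared_risks": ["bias in training data", "profiling of applicants"],
        "mitigations": ["balanced sampling", "human review of all rejections"],
    },
]

def unlinked_dpias(links: list[dict], all_dpia_ids: set[str]) -> set[str]:
    """Flag DPIAs that are not yet tied to any AI system documentation."""
    return all_dpia_ids - {link["dpia_id"] for link in links}

print(unlinked_dpias(dpia_ai_links, {"DPIA-2025-03", "DPIA-2025-07"}))
# -> {'DPIA-2025-07'}: exactly the kind of gap a reviewer would ask about
```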
4. Vendor and partner duties: shared responsibility by contract
AI compliance doesn’t stop at your firewall.
Under Articles 24–28 of the AI Act, providers and deployers must ensure their vendors and partners (data providers, model suppliers, API integrators) uphold equivalent standards.
For inspection, this means maintaining contracts that define:
Data ownership, access rights, and permitted uses.
Incident reporting obligations and timeframes.
Roles in case of system failure or non-compliance.
Your reviewer will ask: Can you show me your supplier governance framework? If the answer isn’t immediate, it’s time to build that evidence trail.
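One simple way to start that evidence trail is a contract register recording whether each required clause is in place for each supplier. The clause names below simply mirror the bullets above; the register format is an assumption.

```python
# Required clauses mirror the three bullets above; names are illustrative.
REQUIRED_CLAUSES = {"data_ownership", "incident_reporting", "failure_roles"}

vendor_contracts = {
    "model-supplier-a": {"data_ownership", "incident_reporting", "failure_roles"},
    "data-provider-b": {"data_ownership"},  # incident reporting and failure roles missing
}

for vendor, clauses in vendor_contracts.items():
    missing = REQUIRED_CLAUSES - clauses
    status = "OK" if not missing else f"MISSING: {sorted(missing)}"
    print(f"{vendor}: {status}")
```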
5. The 10-day prep plan for a review
If you got a review notice tomorrow, could you be ready in 10 days? Here’s a condensed plan:
Day 1–2: Identify all AI systems in scope.
Day 3–4: Confirm classification and risk level.
Day 5–6: Assemble your evidence pack (model cards, policies, logs, RACI).
Day 7–8: Cross-check DPIA and AI audit coverage.
Day 9–10: Validate vendor contracts and reporting chains.
By the end, you’ll have a working “compliance binder” that can withstand inspection — and that you can iterate as regulation and guidance evolve.
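If you prefer to track the plan as a living checklist rather than a static list, a sketch like the following works; the task names simply restate the plan above, and the structure is illustrative.

```python
# The ten-day plan above, as a trackable checklist (structure is illustrative).
prep_plan = [
    ("Day 1-2", "Identify all AI systems in scope", True),
    ("Day 3-4", "Confirm classification and risk level", True),
    ("Day 5-6", "Assemble evidence pack (model cards, policies, logs, RACI)", False),
    ("Day 7-8", "Cross-check DPIA and AI audit coverage", False),
    ("Day 9-10", "Validate vendor contracts and reporting chains", False),
]

done = sum(1 for _, _, complete in prep_plan if complete)
print(f"Inspection readiness: {done}/{len(prep_plan)} steps complete")
```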
From theory to trust
Compliance under the AI Act isn’t about producing documents; it’s about creating trust. Reviewers aren’t looking for perfection — they’re looking for evidence that you’ve operationalized governance.
Start small, document everything, and make your evidence pack your living audit trail. That’s how you stay inspection-ready from day one.
👉 Download the AI Audit Checklist from the European Data Protection Board (EDPB)
Turn these principles into a practical toolkit your team can follow.
