AI regulatory compliance is the uncomfortable question made practical: if you already meet privacy and security expectations, what changes when risk sits in models, training data, and automated decisions? The EU has tightened the frame with the AI Act alongside existing law that bites harder whenever you process personal data or run business-critical systems.
In 2026, parts of the AI Act are already enforceable: prohibitions have applied since February 2025 and GPAI obligations since August 2025, with high-risk system requirements phasing in through 2026 and 2027. Proving compliance matters as much as building it. Enterprise buyers, supply-chain audits, and frameworks such as the NIS2 Directive and DORA push AI out of side projects and into documented governance.
Below is an actionable map: which laws and standards fit together, how to inventory use cases, what evidence to prepare, and how to line up ISO 42001 with AI Act duties without running duplicate programmes.
Regulatory stack: GDPR, AI Act, and standards for AI systems
You rarely face “one AI law only”. You face overlap.
Data protection. The GDPR still anchors personal data: lawful basis, minimisation, transparency, DPIAs where risk is high, and demonstrable technical and organisational measures. AI that profiles people or infers attributes does not sit outside that frame: you still need clear purpose, proportionality, and human oversight where it applies.
EU AI Act. The Regulation (EU) 2024/1689 tiers obligations by risk: unacceptable (prohibited), high (strict requirements), limited (transparency duties), and minimal or no risk (no specific obligations). It also adds a cross-cutting category for general-purpose AI models (GPAI), with their own transparency duties and, for the most capable models, systemic-risk assessments.
Market-driven standards. ISO/IEC 42001:2023 gives a management system for AI: policy, impact assessment, risk treatment, and improvement. It does not replace statute, but it structures the work you later map to the AI Act and to customer audits.
Contracts and cyber. Cloud DPAs, processor clauses, and security programmes (NIS2-style duties where they apply, internal policies) shape how you deploy models and what records you hold when something breaks.
A useful AI compliance guide turns that map into owners, timelines, and proof—not a list of articles with no owner.
Inventory and classification: without this, the rest is opinion
Before long policy decks, build a living inventory of use cases: what the system does, which data it uses, who maintains it, whether there is automated decision-making with legal or similar effects, and whether the model is in-house, fine-tuned, or vendor-hosted.
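If you want the register to live next to the code, a minimal sketch in Python is enough to start; every field name and example value below is illustrative, not a prescribed schema.

```python
from dataclasses import dataclass
from enum import Enum

class ModelOrigin(Enum):
    IN_HOUSE = "in-house"
    FINE_TUNED = "fine-tuned"
    VENDOR_HOSTED = "vendor-hosted"

@dataclass
class AIUseCase:
    """One row in the living inventory; fields mirror the questions above."""
    name: str
    purpose: str                # what the system does
    data_categories: list[str]  # which data it uses
    owner: str                  # who maintains it
    automated_decisions: bool   # legal or similarly significant effects?
    model_origin: ModelOrigin   # in-house, fine-tuned, or vendor-hosted

# Hypothetical entry, for illustration only.
register = [
    AIUseCase(
        name="invoice-triage",
        purpose="Route incoming invoices to the right approval queue",
        data_categories=["supplier data", "invoice amounts"],
        owner="finance-ops",
        automated_decisions=False,
        model_origin=ModelOrigin.VENDOR_HOSTED,
    ),
]
```

Whether this lives in code, a spreadsheet, or a GRC tool matters less than keeping it current; the fields are the point.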
Three choices cut legal uncertainty (a decision-record sketch follows the list):
- Purpose and boundaries. Write what the AI is for, and what is explicitly out of scope, so feature creep does not destroy your audit story.
- AI Act risk band. Classify conservatively. If you sit between two levels, document the analysis and conclusion. That trail is what regulators and B2B customers usually ask for first.
- Personal data or not. If personal data is in play, you are back to GDPR tooling (DPIA, records of processing, controls). If not, you may still owe traceability and model quality evidence under contract or internal standards.
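To make that decision trail concrete, here is a hedged sketch of a classification record in Python; the four risk bands are the Act's tiers, while the helper, names, and example values are hypothetical.

```python
from dataclasses import dataclass

# Ordered strictest first; lower index = stricter band.
RISK_BANDS = ("prohibited", "high", "limited", "minimal")

@dataclass
class Classification:
    use_case: str
    risk_band: str       # one of RISK_BANDS, chosen conservatively
    rationale: str       # the analysis regulators and customers ask for first
    personal_data: bool  # True brings GDPR tooling back into scope

def classify(use_case: str, candidate_bands: list[str],
             rationale: str, personal_data: bool) -> Classification:
    """When torn between two bands, take the stricter one and keep the reasoning."""
    strictest = min(candidate_bands, key=RISK_BANDS.index)
    return Classification(use_case, strictest, rationale, personal_data)

decision = classify(
    "invoice-triage",
    candidate_bands=["limited", "minimal"],
    rationale="No legal effect on individuals; outputs reviewed by finance-ops.",
    personal_data=True,  # supplier contacts, so DPIA screening applies
)
```

Defaulting to the stricter band is exactly the conservative classification the second bullet recommends, and the rationale field is the trail reviewers ask for.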
The Privalex section below spells out how we support this work in practice.
Governance: policies, roles, and evidence across the lifecycle
AI governance is not a PDF on a shelf. It is a cycle tied to design, deployment, monitoring, and retirement.
Patterns that work in mid-sized and larger organisations:
- System owner who can set limits and escalate to a compliance forum.
- Acceptance criteria before production: tests, known bias limits, version logs, and vendor dependency notes (see the gate sketch after this list).
- Human oversight where the AI Act or your own risk model requires it: who reviews, when, and with what SLA.
- Incident handling that covers model failure, unsafe outputs, or training-data leaks, linked to GDPR breach playbooks where relevant.
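As an illustration of the acceptance-criteria pattern above, a small release gate can refuse production until the evidence exists; the field names and blocker messages below are a sketch, not a standard.

```python
from dataclasses import dataclass

@dataclass
class ReleaseEvidence:
    tests_passed: bool
    bias_limits_documented: bool   # known limits written down, not "no bias"
    model_version: str | None      # entry in the version log
    vendor_dependencies_noted: bool

def acceptance_gate(evidence: ReleaseEvidence) -> list[str]:
    """Return the list of blockers; an empty list means the gate is open."""
    blockers = []
    if not evidence.tests_passed:
        blockers.append("tests missing or failing")
    if not evidence.bias_limits_documented:
        blockers.append("known bias limits not documented")
    if evidence.model_version is None:
        blockers.append("no version log entry")
    if not evidence.vendor_dependencies_noted:
        blockers.append("vendor dependency notes missing")
    return blockers
```

The blocker list doubles as evidence: an archived empty result shows the gate actually ran before release.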
Support on AI compliance and governance should tie into privacy, security, and vendor management so legal and engineering are not working in silos.
ISO 42001 and the AI Act: one thread, not two audit programmes
Running parallel audits per standard is expensive. ISO 42001 organises policy, organisational impact assessment, AI-related risk treatment, and continual improvement. Those same building blocks feed much of what the AI Act expects on documentation, data quality, and oversight. Post-market monitoring, however, is an explicit legal duty under the AI Act for high-risk system providers (Art. 72), not merely a principle the ISO reflects.
A sensible path:
- Define scope for the AI management system, which units and which system types are in.
- Cross-walk ISO 42001 controls with AI Act articles and with GDPR DPIAs and policies where personal data exists (a mapping sketch follows this list).
- Unify evidence: one change log, one review cadence, one test report store, with cross-references.
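A cross-walk can be as plain as a shared mapping. In the sketch below, the ISO-side labels are descriptive rather than official control IDs, and the AI Act article numbers are the commonly cited ones; verify both against your copy of the standard and the text of Regulation (EU) 2024/1689.

```python
# Illustrative cross-walk: one evidence artefact serving both programmes.
CROSSWALK = {
    "AI risk assessment and treatment": {
        "ai_act": "Art. 9 (risk management system)",
        "evidence": "risk register entries and treatment decisions",
    },
    "data management for AI": {
        "ai_act": "Art. 10 (data and data governance)",
        "evidence": "dataset lineage and quality checks",
    },
    "documented system life cycle": {
        "ai_act": "Art. 11 (technical documentation)",
        "evidence": "design docs and one shared change log",
    },
    "human oversight arrangements": {
        "ai_act": "Art. 14 (human oversight)",
        "evidence": "review roles, triggers, and SLAs",
    },
    "monitoring and improvement": {
        "ai_act": "Art. 72 (post-market monitoring)",
        "evidence": "monitoring plan and incident reviews",
    },
}
```

Each row points both audit tracks at the same artefact, which is the whole economy of the single-thread approach.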
For the regulation itself, our post on the EU Artificial Intelligence Act summarises timelines and risk logic; you can cross-walk the same story with an ISO 42001 management system without running two unrelated audit tracks.
Operational checklist before you ship or materially change an AI system
Use this as a reasonable minimum before you open a model to internal or external users (a preflight sketch follows the list):
- Purpose and limits in writing, signed off by the system owner.
- Risk assessment covering personal data, discrimination, security, and vendor reliance.
- Documented tests with archived results (model version included).
- Human oversight plan where the system type and risk level require it.
- Usage instructions and guardrails for the team operating the product.
- Retirement or swap plan if the model no longer meets policy or the vendor changes terms.
For systems that process personal data, add what your GDPR programme already expects: lawful basis, transparency, minimisation, and security measures aligned with ISO 27001 if that is your corporate baseline.
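Compressed into code, the checklist becomes a preflight gate; the item names below are shorthand for the bullets above, and the function is a sketch, not a compliance guarantee.

```python
def preflight(item_status: dict[str, bool], personal_data: bool) -> list[str]:
    """Return the unmet items from the minimum checklist above."""
    required = [
        "purpose and limits signed off",
        "risk assessment covering data, discrimination, security, vendors",
        "tests archived with model version",
        "human oversight plan where required",
        "usage instructions and guardrails",
        "retirement or swap plan",
    ]
    if personal_data:  # the GDPR additions from the paragraph above
        required += [
            "lawful basis", "transparency", "minimisation",
            "security measures aligned with the corporate baseline",
        ]
    return [item for item in required if not item_status.get(item, False)]
```

Run it at ship time and at every material change; an unmet item is a conversation with the system owner, not just a failed build.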
What auditors and customers actually look for
Reviewers are not satisfied with “we use AI”. They want a decision chain: who approved what, on which data, with which controls, and what happens when things fail.
Keep one coherent pack: inventory, AI Act risk classification, change records, test outcomes, model vendor contracts, and, where relevant, DPIAs and supplementary measures. That pack is what keeps your sales narrative consistent with the AI Act and credible with partners that already operate under strict frameworks.
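A simple manifest keeps the pack navigable; the keys and descriptions below are one illustrative layout, not a required format.

```python
# One coherent pack, cross-referenced rather than duplicated.
AUDIT_PACK = {
    "inventory": "register of AI use cases",
    "classification": "AI Act risk band per use case, with rationale",
    "changes": "single change log covering model and prompt versions",
    "tests": "archived test outcomes, tied to model versions",
    "vendors": "model vendor contracts and DPAs",
    "privacy": "DPIAs and supplementary measures, where personal data applies",
}
```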
What Privalex offers (and how it fits regulated AI)
Privalex is a consultancy focused on certifications, regulatory compliance, and data protection: we are not a generic law firm or a back-office agency. We work with compliance leads, CISOs, legal, and product so obligations stay demonstrable, auditable, and useful for the business.
In practice we cover the arc that usually appears when AI ships in a product or a process: GDPR-aligned privacy and information security, external DPO support when you need a dedicated role without hiring in-house, cloud and vendor-heavy setups, and a dedicated thread on AI compliance and governance (policy, risk, evidence, legal–engineering coordination).
On European frameworks and certifications, we help teams line up the AI Act, NIS2, DORA, the Data Act, ISO 27001, ISO 27701, ISO 42001, ENS, or SOC 2-aligned readiness depending on sector and contract language. You rarely need the full catalogue, only the slice your market demands.
Where it matters we also support brand protection, corporate and IP legal work, and HIPAA-aligned programmes for health-related data. Training programmes and initial gap reviews sit in the same approach: compliance that teams can run, not a one-off PDF in a shared drive.
Frequently asked questions
Does ISO 42001 replace AI Act compliance?
No. ISO 42001 is a management-system standard; the AI Act is law with duties, timelines, and penalties. What ISO does well is organise roles and evidence so demonstrating AI Act compliance is more direct.
If we only use ChatGPT internally, do the same rules apply?
It depends on use case, personal data, and whether the system feeds product or decisions that affect third parties. Occasional use with non-sensitive data is not the same as an automated flow on customer data. Inventory and risk classification prevent guesswork.
How does this relate to GDPR?
Directly. If the AI processes personal data, GDPR sets lawful basis, transparency, DPIAs where required, and security measures. The AI Act adds layers on the AI system itself; many projects run both in parallel.
Do we need ISO 42001 certification to sell in the EU?
Not as a general legal requirement. It is often a market and efficiency choice: credibility with buyers and cleaner internal and external audits. Legal pressure comes from the AI Act, GDPR, and sector rules—not from the ISO badge alone.
Where can I read official guidance on AI and personal data?
The European Data Protection Board and national supervisors publish guidance; pair the AI Act and GDPR texts with those materials for operational interpretation.
What timeline should we plan for?
Prioritise inventory and classification in weeks, not endless quarters, then iterate: a complete first register beats a perfect policy nobody runs. AI compliance is continuous, especially when models or vendors change.