These are the key topics covered in this guide:
- What is the EU AI Act?
- Who does it apply to?
- The four risk categories explained
- What high-risk AI systems must comply with
- Obligations for general-purpose AI (GPAI) model providers
- The enforcement timeline: what is already in force
- Penalties for non-compliance
- Common mistakes organisations make
- How PrivaLex can help
The EU Artificial Intelligence Act (AI Act) entered into force on 1 August 2024, making it the world’s first comprehensive legal framework for AI. It is not a future requirement. Several of its obligations are already in force, and the most significant deadlines for most organisations, including those using AI in high-risk contexts, arrive in August 2026. If your organisation develops, deploys, imports or distributes AI systems in the EU, this regulation applies to you. Here is what you need to understand.
What Is the EU AI Act?
The AI Act is a directly applicable EU regulation that establishes harmonised rules for the development, placement on the market and use of artificial intelligence systems across the European Union. Like the GDPR, it has extraterritorial effect: if you place AI systems on the EU market or your AI outputs are used in the EU, the regulation applies regardless of where your organisation is based.
The Act follows a risk-based approach, classifying AI systems into four categories based on the level of risk they pose to health, safety and fundamental rights. The obligations imposed on each category scale with that risk level, from outright prohibition to transparency requirements to full conformity assessment.
Who Does the AI Act Apply To?
The AI Act applies to a broad range of actors in the AI value chain. The collective term used in the regulation is operators, which includes:
- Providers: organisations or individuals that develop an AI system or general-purpose AI model and place it on the market or put it into service under their own name or trademark.
- Deployers: organisations or individuals that use an AI system under their own authority in a professional context.
- Importers: organisations established in the EU that place on the EU market an AI system bearing the name or trademark of a natural or legal person established outside the EU.
- Distributors: organisations in the supply chain, other than the provider or importer, that make an AI system available on the EU market.
- Product manufacturers: organisations that place on the market or put into service an AI system together with their product under their own name or trademark.
If your organisation uses AI in finance, healthcare, HR, education, critical infrastructure or public services, you are very likely to fall under the high-risk category. If you build general-purpose AI models, or fine-tune them significantly and place them on the market under your own name, the GPAI provider obligations apply to you directly.
The Four Risk Categories
Unacceptable Risk: Prohibited
These AI systems are banned outright and have been prohibited since 2 February 2025. They include AI that uses subliminal or manipulative techniques to distort human behaviour, social scoring systems that evaluate individuals based on their behaviour over time, predictive policing tools that profile individuals to assess their likelihood of committing crimes, and real-time remote biometric identification systems in publicly accessible spaces (with narrow law enforcement exceptions).
High Risk: Strictly Regulated
These are systems that pose significant risks to health, safety or fundamental rights. They are listed in Annex III of the AI Act and include:
- Credit scoring and financial risk assessment systems
- Automated recruitment tools and CV-screening systems
- AI used in educational admissions or assessment
- Medical diagnostic and clinical decision-support tools
- Biometric identification and categorisation systems
- AI used in access to public services and benefits
- AI in critical infrastructure management (energy, water, transport)
- Law enforcement tools including risk assessment for individuals
High-risk AI systems must meet detailed compliance obligations before being placed on the market. These requirements apply fully from 2 August 2026 for most standalone high-risk systems.
Limited Risk: Transparency Obligations
This category covers systems with specific transparency duties under Article 50 of the Act. The most relevant examples are chatbots and AI systems that interact directly with users, and AI-generated content including images, audio and video (deepfakes). Providers must ensure users are informed that they are interacting with an AI system or that content is AI-generated. These transparency obligations become fully applicable on 2 August 2026.
Minimal or No Risk
The vast majority of AI systems currently in use fall into this category, for example AI-enabled spam filters, video game AI and recommendation engines. No specific obligations apply under the AI Act, though general EU law (including the GDPR where personal data is involved) continues to apply.
What High-Risk AI Systems Must Comply With
Providers and deployers of high-risk AI systems must meet a comprehensive set of requirements before and after placing their system on the market:
- Risk management system: a continuous process to identify, analyse and mitigate risks associated with the AI system throughout its lifecycle.
- Data governance: training, validation and testing data must meet quality criteria and be relevant, sufficiently representative and, to the best extent possible, free of errors and complete.
- Technical documentation: comprehensive documentation demonstrating compliance, kept up to date and made available to national authorities on request.
- Automatic logging (record-keeping): systems must generate logs enabling post-market monitoring and investigation of incidents (a minimal sketch follows this list).
- Transparency to deployers: providers must supply instructions for use that enable deployers to understand and use the system appropriately.
- Human oversight: systems must be designed to enable human review, intervention and override during operation.
- Accuracy, robustness and cybersecurity: systems must perform consistently, resist manipulation and maintain performance over time.
- Conformity assessment: most high-risk systems require a formal conformity assessment (some by a notified body) before a CE marking can be applied and the system placed on the market.
- Registration: providers must register high-risk AI systems in the EU database before placing them on the market.
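The automatic logging requirement is one of the few in this list that translates directly into engineering practice. Below is a minimal illustrative sketch of the kind of decision log a high-risk system might emit. It is a sketch under our own assumptions, not a compliance implementation: the field names are invented for illustration, and what a given system must actually record depends on its risk profile and the applicable harmonised standards.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative only: field names are assumptions, not prescribed by the AI Act.
logger = logging.getLogger("ai_decision_audit")
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler("ai_decisions.log"))

def log_decision(system_id: str, model_version: str, input_ref: str,
                 output: str, human_reviewer: str | None = None) -> None:
    """Record one automated decision in an append-only audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,            # which high-risk system produced the output
        "model_version": model_version,    # supports post-market monitoring across versions
        "input_ref": input_ref,            # a reference to the input, not the raw personal data
        "output": output,
        "human_reviewer": human_reviewer,  # evidence of human oversight, where it occurred
    }
    logger.info(json.dumps(record))

# Example: a CV-screening system logs a shortlisting decision reviewed by a recruiter.
log_decision("cv-screener-01", "2.3.1", "application-4812", "shortlisted",
             human_reviewer="hr.reviewer@example.com")
```

Storing a reference to the input rather than the input itself keeps the audit trail useful for incident investigation while limiting the personal data duplicated into logs, which matters because the GDPR applies to the logs too.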
Standards such as ISO 42001 (AI management systems) can provide a structured framework to meet many of these requirements efficiently, particularly around risk management, data governance and documentation.
Obligations for General-Purpose AI (GPAI) Model Providers
The AI Act introduced a specific chapter for general-purpose AI (GPAI) models: large, versatile AI models, such as large language models (LLMs) and image generators, that can be used for a wide range of tasks. These obligations have been in force since 2 August 2025.
All GPAI model providers must:
- Maintain technical documentation demonstrating how the model was developed, tested and evaluated.
- Publish a public summary of training content using the European Commission’s standard template, covering the types of data used to train the model.
- Comply with EU copyright law, including by implementing policies to respect opt-outs from text and data mining.
- Provide model cards, that is, information to downstream providers and deployers about what the model is designed to do and its limitations (an illustrative sketch follows this list).
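As a rough illustration of what that downstream information can look like, the sketch below models a minimal model card as a plain data structure. The field names are our assumptions for this sketch; the Act’s actual documentation requirements are set out in its annexes and the Commission’s templates.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Illustrative summary a GPAI provider might hand to downstream deployers.

    Field names are assumptions for this sketch, not the Act's prescribed format.
    """
    model_name: str
    version: str
    intended_uses: list[str]
    known_limitations: list[str]
    prohibited_uses: list[str] = field(default_factory=list)

card = ModelCard(
    model_name="example-llm",
    version="1.0",
    intended_uses=["text summarisation", "drafting assistance"],
    known_limitations=["may state false facts confidently", "weaker performance outside English"],
    prohibited_uses=["fully automated credit decisions without human review"],
)
```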
Providers of GPAI models with systemic risk (in practice, models trained with very large amounts of compute, above the Act’s threshold of 10^25 floating-point operations, or designated as such by the Commission) face additional obligations: model evaluation and adversarial testing, systemic risk assessment and mitigation, serious incident reporting to the EU AI Office, adequate cybersecurity protection, and documentation of energy consumption. The GPAI Code of Practice, published in July 2025, is a voluntary compliance tool that helps providers demonstrate how they meet these requirements.
GPAI models already on the market before 2 August 2025 have a transition window until 2 August 2027 to achieve full compliance, but providers must demonstrate they are actively taking the necessary steps.
The Enforcement Timeline: What Is Already in Force
Understanding exactly what applies when is essential for planning. Here is the confirmed timeline:
- 1 August 2024, AI Act enters into force.
- 2 February 2025, Prohibited AI practices banned. AI literacy obligations apply. Non-compliance can attract penalties.
- 2 August 2025, GPAI obligations apply. Governance rules and AI Office fully operational. Penalty regime enters into effect (fines for most violations now applicable). National competent authorities must be designated.
- 2 August 2026, AI Act becomes generally applicable. High-risk AI obligations (Annex III) apply. Transparency rules (Article 50) apply. Commission enforcement powers for GPAI model providers enter into application.
- 2 August 2027, Rules for high-risk AI systems embedded in regulated products (Annex I) apply. GPAI models placed on the market before August 2025 must be fully compliant by this date.
The practical implication: if your organisation deploys or develops high-risk AI systems, August 2026 is your key deadline. If you provide GPAI models, you are already subject to obligations and should be acting now.
Penalties for Non-Compliance
The AI Act introduced a tiered penalty structure. As of 2 August 2025, the following fines are applicable (a worked example of how the caps operate follows the list):
- Up to €35 million or 7% of global annual turnover (whichever is higher) for infringements of the prohibition on unacceptable-risk AI practices.
- Up to €15 million or 3% of global annual turnover for most other violations of the Act, including the obligations of high-risk AI providers and GPAI model providers.
- Up to €7.5 million or 1% of global annual turnover for providing incorrect, incomplete or misleading information to authorities.
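To see how the “whichever is higher” mechanism plays out, here is a minimal worked sketch. The turnover figures are invented for illustration, the amounts shown are statutory maximums rather than likely fines, and the sketch ignores the Act’s special lower caps for SMEs.

```python
def max_fine(fixed_cap_eur: int, turnover_share: float, global_turnover_eur: int) -> float:
    """Statutory maximum: the higher of the fixed cap and the turnover-based cap."""
    return max(fixed_cap_eur, turnover_share * global_turnover_eur)

# Hypothetical large provider with €2 billion global annual turnover:
# the 7% turnover cap (€140m) exceeds the €35m fixed cap for prohibited practices.
print(max_fine(35_000_000, 0.07, 2_000_000_000))  # 140000000.0

# Hypothetical smaller company with €50 million turnover:
# 7% is only €3.5m, so the €35m fixed cap is the applicable maximum.
print(max_fine(35_000_000, 0.07, 50_000_000))  # 35000000.0
```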
Enforcement at national level is carried out by national competent authorities designated by each EU member state. At EU level, the AI Office, fully operational since August 2025, oversees GPAI model compliance and coordinates enforcement across the bloc.
Common Mistakes Organisations Make
Assuming the AI Act does not apply because you only use AI, not develop it
Deployers, organisations that use AI systems in a professional context, have specific obligations under the Act, particularly for high-risk systems. Using a third-party AI tool for recruitment, credit assessment or employee monitoring does not exempt you from compliance requirements.
Treating risk classification as optional
The risk category your AI systems fall into determines your obligations. Failing to conduct a structured classification exercise means you may be operating high-risk systems without the required documentation, oversight mechanisms or conformity assessments in place.
Waiting until August 2026 to start preparing
Conformity assessments, technical documentation, risk management systems and human oversight mechanisms all take time to implement properly. Organisations that start their preparation now will be in a significantly better position than those that wait until the deadline is imminent.
Overlooking GPAI obligations if you use foundation models
If your organisation fine-tunes, hosts or distributes a GPAI model, including open-source models, you may have provider-level obligations under the Act. The Commission’s guidelines on GPAI obligations clarify that only those making significant modifications to a model inherit provider obligations; minor adjustments do not. But the boundary requires careful analysis.
Treating AI governance as separate from existing compliance frameworks
AI governance does not exist in isolation. High-risk AI systems that process personal data are subject to both the AI Act and the GDPR simultaneously. Data protection impact assessments, records of processing and breach response procedures all intersect with AI Act requirements. Integrating your AI governance with your existing privacy and security compliance programme is more efficient and more robust than running them separately.
How PrivaLex Can Help
At PrivaLex Partners we help organisations understand how the AI Act applies to their specific use cases and build a compliance programme that is practical, proportionate and ready for enforcement.
Our support covers:
- AI system inventory and risk classification: mapping your AI use cases against the Act’s risk tiers and identifying which obligations apply.
- Gap assessment: reviewing your current documentation, governance structures and technical controls against AI Act requirements.
- ISO 42001 implementation: building an AI management system aligned to the international standard, which provides a strong foundation for AI Act compliance.
- GPAI compliance support: technical documentation, training data summaries and copyright compliance measures for GPAI model providers.
- Policy and process development: AI governance policies, human oversight procedures, incident reporting protocols and transparency documentation.
- Integration with GDPR and NIS2: ensuring your AI compliance programme is aligned with your existing data protection and cybersecurity obligations.
- Team training: AI literacy programmes that meet the Act’s requirements and ensure your teams understand their responsibilities.
Book a call with PrivaLex to assess how the AI Act applies to your organisation and what a realistic compliance roadmap looks like.
Frequently Asked Questions (FAQs)
Is the EU AI Act already in force?
Yes. The AI Act entered into force on 1 August 2024. Several obligations are already active: prohibited AI practices have been banned since 2 February 2025, GPAI obligations have applied since 2 August 2025, and the penalty regime is operational. The main obligations for high-risk AI systems apply from 2 August 2026.
Does the AI Act apply to my company if we are based outside the EU?
Yes, if your AI systems are placed on the EU market or their outputs are used within the EU. The Act has extraterritorial effect comparable to the GDPR. Providers outside the EU that offer AI systems to EU users or organisations must designate an authorised representative established in the EU.
How do I know if my AI system is high-risk?
High-risk AI systems are listed in Annex III of the AI Act. They cover eight domains: biometrics, critical infrastructure, education, employment, essential private and public services (such as credit and benefits), law enforcement, migration and border control, and the administration of justice and democratic processes. If your AI system falls within one of these domains and performs one of the listed functions, it is high-risk. The Commission is also developing guidance to clarify borderline cases. A structured classification exercise, which PrivaLex can support, is the right starting point.
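As a starting point for that structured classification exercise, here is a deliberately simplified triage sketch. The keyword lists are our own illustrative assumptions, not the Annex III wording: a real classification turns on the precise legal text and its exemptions, and should be confirmed by legal review.

```python
# Deliberately simplified first-pass triage, not a legal determination.
ANNEX_III_DOMAINS = {
    "biometrics": ["biometric identification", "biometric categorisation"],
    "critical_infrastructure": ["energy grid", "water supply", "transport network"],
    "education": ["admissions", "exam scoring"],
    "employment": ["recruitment", "cv screening", "employee monitoring"],
    "essential_services": ["credit scoring", "benefits eligibility"],
    "law_enforcement": ["offending risk assessment"],
    "migration": ["visa application", "border control"],
    "justice": ["judicial decision support"],
}

def triage_use_case(description: str) -> list[str]:
    """Return the Annex III domains whose keywords appear in a use-case description."""
    text = description.lower()
    return [domain for domain, keywords in ANNEX_III_DOMAINS.items()
            if any(keyword in text for keyword in keywords)]

hits = triage_use_case("CV screening tool that ranks applicants for interview")
if hits:
    print(f"Potentially high-risk (domains: {hits}); escalate for formal classification.")
else:
    print("No keyword match; still verify against the full Annex III text.")
```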
What is a general-purpose AI model and does the Act apply to mine?
A general-purpose AI (GPAI) model is an AI model trained on large amounts of data that can perform a wide range of tasks. Large language models such as GPT, Claude or Llama are typical examples. If your organisation provides (develops, fine-tunes significantly, or distributes) a GPAI model, you have obligations under the Act that have been in force since August 2025. If you use a GPAI model as a deployer without significant modification, you are treated as a deployer rather than a provider.
Does the AI Act overlap with the GDPR?
Yes, significantly. High-risk AI systems that process personal data must comply with both the AI Act and the GDPR simultaneously. Requirements around data quality, data minimisation, documentation and impact assessments overlap and reinforce each other. A Data Protection Impact Assessment (DPIA) is often required under the GDPR for high-risk AI processing, independently of the AI Act’s own conformity assessment requirements. PrivaLex can help you integrate both into a single, coherent compliance programme.
What is ISO 42001 and how does it relate to the AI Act?
ISO 42001 is the international standard for AI management systems. It provides a structured framework for governing AI risk, documentation, oversight and continuous improvement that maps closely onto the AI Act’s requirements for high-risk systems. Implementing ISO 42001 is not legally required under the AI Act, but it provides a strong, auditable foundation for compliance and can significantly reduce the effort required to meet the Act’s documentation and risk management obligations.
What should we be doing right now?
The most important immediate steps are: map all AI systems in use across your organisation, classify each by risk tier, assess whether any GPAI model obligations apply to systems your organisation provides or modifies, and start building your governance documentation for any high-risk systems ahead of the August 2026 deadline. If you are unsure where to start, contact PrivaLex for an initial assessment.
Next Step
The AI Act is not a future problem. Prohibited AI has been banned since February 2025. GPAI obligations are active. High-risk deadlines arrive in August 2026. The organisations that will navigate this most smoothly are those that start their classification and documentation work now, not in the months before enforcement begins. Book a call with PrivaLex to understand exactly where your organisation stands and what a proportionate, practical AI Act compliance programme looks like for your business.
