The European Union’s Artificial Intelligence Act (AI Act) is the first serious and comprehensive attempt to regulate artificial intelligence at an international level. If you develop, sell or use AI in Europe, or plan to do so, now is the time to pay attention.
Whether you’re a startup building machine learning models or a company applying AI in customer services or internal decision-making, the AI Act introduces new responsibilities, especially if you work with high-risk systems.
Here’s what you need to know: who it applies to, what it requires and how you can start preparing.
What is the AI Act?
The AI Act is an EU regulation, proposed by the European Commission and now formally adopted, designed to ensure that AI systems used in the EU are safe, transparent and respectful of fundamental rights.
Like the GDPR, it has extraterritorial effect: if you offer AI-based products or services in the European market, even if you operate from outside the EU, you will be subject to this regulation.
The law follows a risk-based approach, classifying AI systems into four categories: unacceptable risk (prohibited), high risk (highly regulated), limited risk (requires transparency) and minimal or no risk (no relevant requirements).
Who will it affect?
The AI Act applies to a broad range of actors, including:
- Developers and providers of AI systems
- Companies that import or distribute AI solutions
- Organisations using AI in high-impact sectors
- Integrators and deployers using AI systems within the EU (even if those systems were developed elsewhere)
If your company uses AI in finance, healthcare, human resources, security, education or access to public services, you are very likely to fall under the high-risk category.
“If your AI can affect someone’s health, employment, credit or legal status, it probably falls under the AI Act.”
What is considered a high-risk system?
Common examples include:
- Credit scoring systems
- Automated recruitment and CV-screening tools
- AI for school admissions or grading
- Medical diagnostic tools
- Biometric recognition (facial, fingerprint, etc.)
- Predictive policing or prioritisation tools in public services
These systems must comply with strict requirements, such as:
- Impact assessments and risk analyses
- Effective human oversight
- Detailed technical documentation
- Transparency for users
- Post-deployment monitoring
- CE marking (as with other regulated products, such as electronics)
What about generative or general-purpose models?
The final text of the law includes specific obligations for foundation models and general-purpose AI (GPAI) systems, such as LLMs (Large Language Models) and image generators.
These models must be clearly labelled, and their providers must document training data, limitations and performance. Responsibilities vary with the scale and capabilities of each system.
If your company uses tools like GPT, Claude or open-source models, the AI Act may apply to how you integrate, supervise and explain them to users.
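To make the transparency point concrete, here is a minimal sketch of what user-facing disclosure could look like when you integrate a third-party model. The `generate` function below is a hypothetical placeholder, not a real vendor SDK call; the only point it illustrates is keeping the model call separate from the disclosure shown to the user.

```python
def generate(prompt: str) -> str:
    """Hypothetical placeholder for whichever model you integrate
    (GPT, Claude, an open-source LLM, ...)."""
    return "Example model output for: " + prompt


def answer_with_disclosure(prompt: str) -> str:
    # Transparency obligation: users should know they are interacting
    # with an AI system, so the reply is explicitly labelled as such.
    reply = generate(prompt)
    return reply + "\n\n[This response was generated by an AI system.]"


print(answer_with_disclosure("Summarise my contract renewal options."))
```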
Penalties and supervision
As with the GDPR, the AI Act introduces significant fines for non-compliance:
- Up to €35 million or 7% of global turnover for the most serious infringements
- Lower penalties for documentation, registration or transparency failures
Each Member State will designate a competent national authority. A European AI Office, set up within the European Commission, will coordinate oversight and issue guidance.
How can you prepare?
Start by mapping your use cases. Where are you using AI? How does it impact employees, customers or critical processes?
Next, identify which models may fall under the high-risk or general-purpose categories. If they do, it’s time to review your documentation, impact assessments and control mechanisms.
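Even a lightweight inventory makes this mapping concrete. The sketch below is an illustrative example only: the risk tiers mirror the Act's categories, but the fields and the gap checks are assumptions made for the sake of the example, not legal criteria.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited practices
    HIGH = "high"                   # heavily regulated
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # no relevant requirements


@dataclass
class AIUseCase:
    name: str
    owner: str              # team accountable for the system
    affects: list[str]      # e.g. ["employment", "credit"]
    tier: RiskTier
    has_human_oversight: bool
    has_technical_docs: bool


def gaps(use_case: AIUseCase) -> list[str]:
    """Flag obvious gaps for high-risk use cases (illustrative checks only)."""
    issues = []
    if use_case.tier is RiskTier.HIGH:
        if not use_case.has_human_oversight:
            issues.append("missing human oversight")
        if not use_case.has_technical_docs:
            issues.append("missing technical documentation")
    return issues


# Example: a CV-screening tool, one of the Act's typical high-risk cases.
cv_screening = AIUseCase(
    name="CV screening assistant",
    owner="HR analytics",
    affects=["employment"],
    tier=RiskTier.HIGH,
    has_human_oversight=True,
    has_technical_docs=False,
)

print(gaps(cv_screening))  # ['missing technical documentation']
```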
Standards such as ISO 42001 can help structure your AI governance and reduce adaptation costs.
“The AI Act is already in force, and companies that prepare early will have the advantage.”
Conclusion
The EU’s Artificial Intelligence Act marks the beginning of a new era: one in which building AI responsibly is no longer optional but mandatory.
At PrivaLex, we help companies understand how the AI Act affects them, adapt their processes and align their strategy with frameworks like ISO 42001. Whether you develop AI or integrate it, we help you prepare with clarity and confidence.