Explore the EU AI Act's intricacies through our practical guide: decode the legal complexities and gain actionable insights for a smooth implementation. From simplified governance principles to real-world examples, equip yourself not only to understand the rules but to apply them effectively in the dynamic landscape of AI.
Our recently published report “Decoding the EU AI Act” dives into the way we will use and regulate artificial intelligence in the future.
This blog helps you navigate the complex legal requirements of the EU AI Act and sheds light on tangible measures to help you implement it in your organization. Know the rules and how to apply them.
Know the rules – legal implications of the EU AI Act
Companies need to consider three main questions to assess the implications of the EU AI Act on their operations.
1. Scope – which role applies to me?
The EU AI Act applies to organizations that place AI systems on the EU market or bring them into service there, or whose deployed AI systems affect EU citizens. The EU AI Act distinguishes between four different roles:
- Provider/manufacturer: a natural or legal person or public authority, agency or other body that develops an AI system and intends to put it on the EU market.
- Importer: a natural or legal person located or established in the EU that places an AI system on the market under the trademark of a natural or legal person established outside the EU.
- Distributor: a natural or legal person in the supply chain, other than the provider or importer, who makes an AI system available on the EU market.
- Deployer: any natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.
It is important to note that roles are not fixed. Importers, distributors and deployers are considered providers if:
- they market the AI system under their own name or trademark,
- they modify the intended purpose of the AI system, or
- they make substantial changes to the AI system.
Therefore, assess your role: Are you a provider, importer, distributor or deployer?
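The reclassification rules above amount to a simple decision procedure. The sketch below illustrates it in Python; the function and field names are illustrative assumptions for this blog, not part of any official EU AI Act tooling, and a real assessment would of course require legal review.

```python
# Hedged sketch of the role-reclassification logic summarized above.
# All names (RoleAssessment, effective_role, the boolean fields) are
# illustrative, not official terminology or tooling.
from dataclasses import dataclass

@dataclass
class RoleAssessment:
    declared_role: str            # "importer", "distributor" or "deployer"
    own_trademark: bool           # AI system marketed under your own name/trademark
    purpose_modified: bool        # intended purpose of the AI system was changed
    substantially_modified: bool  # major changes were made to the AI system

def effective_role(a: RoleAssessment) -> str:
    """An importer, distributor or deployer is treated as a provider
    if any one of the three reclassification criteria applies."""
    if a.own_trademark or a.purpose_modified or a.substantially_modified:
        return "provider"
    return a.declared_role

# Example: a distributor rebranding a system under its own trademark
# takes on provider obligations.
print(effective_role(RoleAssessment("distributor", True, False, False)))
```

The point of the sketch is that any single criterion is enough to shift provider obligations onto you, so each one needs to be checked independently.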
2. AI classification – what category does my AI system fall into?
The EU AI Act follows a risk-based approach. Depending on the risk classification, different obligations apply:
- Prohibited systems: these AI systems are prohibited because they pose a threat to people and their fundamental rights. Social scoring systems are an example of a prohibited system.
- High-risk systems: these AI systems can negatively affect safety or the fundamental rights of people and therefore need to comply with a comprehensive set of rules before being placed on the market.
- Limited-risk systems: these AI systems must comply with transparency requirements to ensure that users can make informed decisions.
- Low-risk systems: these AI systems don’t pose considerable risks and therefore don’t need to be formally regulated.
For this reason, it is critical to assess which risk category your AI system falls into – there are different legal obligations to comply with for each category.
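The risk tiers above can be read as a lookup from category to headline obligation. The following minimal sketch makes that mapping explicit; the tier names and obligation summaries are paraphrased from this guide, not quoted from the Act, and the code itself is purely illustrative.

```python
# Hedged sketch: maps the four risk tiers described above to the headline
# obligation each tier carries. Wording is paraphrased from this guide.
OBLIGATIONS = {
    "prohibited": "may not be placed on the EU market at all",
    "high-risk": "must meet comprehensive requirements before market placement",
    "limited-risk": "must meet transparency requirements",
    "low-risk": "no formal obligations under the Act",
}

def headline_obligation(risk_tier: str) -> str:
    """Return the headline obligation for a given risk tier."""
    try:
        return OBLIGATIONS[risk_tier]
    except KeyError:
        raise ValueError(f"unknown risk tier: {risk_tier!r}")

print(headline_obligation("limited-risk"))
```

Classifying the system correctly is the critical step: everything downstream, including which compliance requirements apply, follows from this lookup.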
3. Compliance requirements – which ones must I meet?
Depending on your role and the classification of the AI system being deployed, different compliance obligations apply, as shown in the table below for high-risk AI systems: