Artificial Intelligence (AI) is offering new benefits to society and businesses, reshaping workplaces and key industries. The push to harness the transformative potential of AI and automation is underway. However, amid the global proliferation of AI in business and daily life, concerns about ethical use and risk are emerging. Trust issues persist: in the Trust in artificial intelligence global study, three in five people say they are wary of AI systems, and 71 percent expect AI to be regulated.
In response, the European Union (EU) has made significant strides with a provisional agreement on the groundbreaking Artificial Intelligence Act (AI Act), which is anticipated to set a new global standard for AI regulation. Envisioned to become law in 2024, with most AI systems needing to comply by 2026, the AI Act takes a risk-based approach to safeguard fundamental rights, democracy, the rule of law, and environmental sustainability.
The EU's AI Act aims to strike a delicate balance, fostering AI adoption while upholding individuals' rights to responsible, ethical, and trustworthy AI use. This paper explores the potential impact of the AI Act on organisations, delving into its structure, obligations, and compliance timelines, and suggesting an action plan for organisations to consider.
Decoding the EU AI Act
EU AI Act Overview
AI holds immense promise to expand the horizon of what is achievable and to impact the world for our benefit — but managing AI's risks and its potential negative consequences, both known and unknown, will be critical. The AI Act is set to be finalised in 2024 and aims to ensure that AI systems are safe, respect fundamental rights, foster AI investment, improve governance, and encourage a harmonised single EU market for AI.
The AI Act's definition of AI is anticipated to be broad and include various technologies and systems. As a result, organisations are likely to be significantly impacted by the AI Act. Most of the obligations are expected to take effect in early 2026. However, prohibited AI systems will have to be phased out six months after the AI Act comes into force. The rules for governing general-purpose AI are expected to apply in early 2025.1
The AI Act applies a risk-based approach, dividing AI systems into different risk levels: unacceptable, high, limited and minimal risk.2
High-risk AI systems are permitted but subject to the most stringent obligations. These obligations will affect not only users but also so-called ‘providers’ of AI systems. The term ‘provider’ in the AI Act covers bodies that develop AI systems, including organisations that develop AI systems for strictly internal use. It is important to note that an organisation can be both a user and a provider.
Providers will likely need to ensure compliance with strict standards concerning risk management, data quality, transparency, human oversight, and robustness.
Users are responsible for operating these AI systems within the AI Act’s legal boundaries and according to the provider's specific instructions. This includes obligations on the intended purpose and use cases, data handling, human oversight and monitoring.
New provisions have been added to address the recent advancements in general-purpose AI (GPAI) models, including large generative AI models.3 These models can be used for a variety of tasks and can be integrated into a large number of AI systems, including high-risk systems, and are increasingly becoming the basis for many AI systems in the EU. To account for the wide range of tasks AI systems can accomplish and the rapid expansion of their capabilities, it was agreed that GPAI systems, and the models they are based on, may have to adhere to transparency requirements. Additionally, high-impact GPAI models, which possess advanced complexity, capabilities, and performance, will face more stringent obligations. This approach will help mitigate systemic risks that may arise due to these models' widespread use.4
Existing Union laws — for example, on personal data, product safety, consumer protection, social policy, and national labour law and practice — continue to apply, as do Union sectoral legislative acts relating to product safety. Compliance with the AI Act will not relieve organisations of their pre-existing legal obligations in these areas.
Organisations should take the time to create a map of the AI systems they develop and use and categorise their risk levels as defined in the AI Act. If any of their AI systems fall into the limited, high or unacceptable risk category, they will need to assess the AI Act’s impact on their organisation. It is imperative to understand this impact — and how to respond — as soon as possible.
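The mapping exercise described above can be sketched as a simple inventory that records each AI system, the organisation's role for it, and its risk tier under the Act. The following Python sketch is purely illustrative — the system names, roles, and the `systems_needing_assessment` helper are hypothetical, not part of the AI Act or any official tooling:

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    """The four risk tiers of the AI Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable"  # prohibited; to be phased out
    HIGH = "high"                  # permitted, strictest obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely out of scope

@dataclass
class AISystemRecord:
    name: str
    role: str  # 'provider', 'user', or 'both' (an organisation can be both)
    risk_level: RiskLevel

def systems_needing_assessment(inventory):
    """Return systems in the limited, high or unacceptable tiers,
    which trigger an impact assessment per the action plan above."""
    flagged = {RiskLevel.UNACCEPTABLE, RiskLevel.HIGH, RiskLevel.LIMITED}
    return [s for s in inventory if s.risk_level in flagged]

# Hypothetical inventory entries for illustration only.
inventory = [
    AISystemRecord("CV screening tool", "user", RiskLevel.HIGH),
    AISystemRecord("Customer chatbot", "both", RiskLevel.LIMITED),
    AISystemRecord("Spam filter", "user", RiskLevel.MINIMAL),
]

for s in systems_needing_assessment(inventory):
    print(f"{s.name}: {s.risk_level.value} risk ({s.role})")
```

A real inventory would of course capture far more detail (intended purpose, data handling, oversight arrangements), but even a simple register like this makes it clear which systems warrant closer legal review.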