The transformative potential of artificial intelligence (AI) and automation is the subject of intense discussion and, increasingly, of practical use. AI offers society and companies new perspectives and many advantages, for example the potential to completely reshape jobs and key industries.

However, amid the global spread of AI in business and everyday life, concerns are also emerging about its risks and its ethical use. In the global study "Trust in Artificial Intelligence", three out of five respondents express reservations about AI systems, and 71 per cent expect regulatory measures.

In response, the European Union (EU) made significant progress with a provisional agreement on the ground-breaking "Artificial Intelligence Act" (AI Act), which sets a new global standard for AI regulation. The Act was passed in March 2024, and most AI systems will have to comply with its requirements by 2026. The AI Act takes a risk-based approach to protect fundamental rights, democracy, the rule of law and environmental sustainability.


The EU's AI Act aims to strike a balance: encouraging the adoption of AI while safeguarding individuals' rights, so that AI is used responsibly, ethically and in a trustworthy manner.

AI holds the great promise of expanding the horizons of what is achievable and changing the world for our benefit. Dealing with its risks and with the known and unknown negative consequences will be crucial. The AI Act, passed in March 2024, aims to ensure that AI systems are safe and respect fundamental rights, while encouraging AI investment, improving governance and promoting a harmonised EU single market for AI.

The definition of AI contained in the AI Act is broad and covers a wide range of technologies and systems, so many organisations are affected by the Act. Most obligations will come into force at the beginning of 2026. However, prohibited AI systems must be withdrawn from use no later than six months after the AI Act comes into force, and the rules for general-purpose AI will come into force at the beginning of 2025.¹

The AI Act follows a risk-based approach that categorises AI systems into four levels of risk: unacceptable, high, limited and minimal.²

High-risk AI systems are permitted, but are subject to the strictest obligations. These obligations apply not only to the users, but also to the so-called "providers" of AI systems. The term "provider" in the AI Act covers the developers of AI systems, including organisations that develop AI systems for purely internal use. An organisation can be both a user and a provider.

Providers will have to ensure compliance with strict standards in terms of risk management, data quality, transparency, human oversight and robustness.

Users of AI are responsible for operating their AI systems within the legal boundaries of the AI Act and in accordance with the provider's specific instructions (shared responsibility). This includes obligations relating to the intended purpose and use cases, data processing, human oversight and monitoring.

New provisions have been added to take account of recent advances in general-purpose AI (GPAI) systems, including large generative AI models such as GPT.³ These models can be used for a variety of tasks and can be integrated into a large number of AI systems, including high-risk systems. They increasingly form the basis for many AI systems in the EU. To take account of the wide range of tasks performed by AI systems and the rapid expansion of their capabilities, it has been agreed that GPAI systems and the models on which they are based must fulfil transparency requirements. In addition, GPAI models with higher complexity, capabilities and performance will be subject to stricter requirements. This approach will help to mitigate systemic risks that may arise from the widespread use of these models.⁴

Existing EU laws, for example on personal data, product safety, consumer protection, social policy and national labour law and practice, continue to apply. The same applies to the sectoral legal acts of the European Union. Compliance with the AI Act does not release organisations from their existing legal obligations in these areas.

What you need to do now:

Inventory: Companies should take the time to create an overview of the AI systems they have developed and use.

Categorisation: These AI systems should then be categorised according to the risk levels defined in the AI Act (see the sketch after this list).

Assessment: If any of the inventoried AI systems fall into the limited, high or unacceptable risk category, you will need to assess the impact of the AI Act on your organisation.

Implementation: For the AI systems concerned, implement additional governance and compliance measures in accordance with the AI Act.
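
To make the inventory, categorisation and assessment steps concrete, a simple internal register can record each system, the organisation's role for it and its risk tier. The following Python sketch is purely illustrative: the risk tiers and roles mirror the AI Act's terminology, but the schema, the example systems and their categorisations are hypothetical assumptions, not legal guidance.

```python
from dataclasses import dataclass
from enum import Enum


class RiskLevel(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited; must be withdrawn from use
    HIGH = "high"                  # permitted, but strictest obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no additional obligations


class Role(Enum):
    """An organisation can hold one or both roles for a given system."""
    PROVIDER = "provider"  # develops the system (incl. purely internal use)
    USER = "user"          # operates the system per provider instructions


@dataclass
class AISystem:
    """One entry in the organisation's AI inventory (hypothetical schema)."""
    name: str
    purpose: str
    roles: set[Role]
    risk_level: RiskLevel


def systems_requiring_assessment(inventory: list[AISystem]) -> list[AISystem]:
    """Assessment step: flag systems in the limited, high or
    unacceptable tiers, which trigger an AI Act impact assessment."""
    in_scope = {RiskLevel.UNACCEPTABLE, RiskLevel.HIGH, RiskLevel.LIMITED}
    return [s for s in inventory if s.risk_level in in_scope]


# Hypothetical example entries; real categorisation needs legal review.
inventory = [
    AISystem("cv-screening", "rank job applicants",
             {Role.USER}, RiskLevel.HIGH),
    AISystem("support-chatbot", "answer customer questions",
             {Role.PROVIDER, Role.USER}, RiskLevel.LIMITED),
    AISystem("spam-filter", "filter inbound e-mail",
             {Role.USER}, RiskLevel.MINIMAL),
]

for system in systems_requiring_assessment(inventory):
    print(f"{system.name}: {system.risk_level.value} risk - assess AI Act impact")
```

In practice, the risk level of each entry follows from the Act's definitions and annexes, so any categorisation recorded this way should be confirmed by a legal assessment.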

¹ European Commission. (December 12, 2023). Artificial Intelligence - Questions and Answers.

² European Council. (December 9, 2023). Artificial Intelligence Act Trilogue: Press conference - Part 4.

³ European Parliament. (March 2023). General-purpose artificial intelligence.

⁴ European Commission. (December 12, 2023). Artificial Intelligence - Questions and Answers.