On 1 August 2024, the European Artificial Intelligence Act (AI Act) entered into force. The AI Act is widely considered to be the world’s first comprehensive legal framework on AI. It aims to manage and mitigate the risks AI may pose to society, without creating an unnecessary burden that could hinder innovation. 

The legislative framework establishes various obligations and responsibilities for the different actors in the AI value chain, including providers, deployers, importers and distributors of AI systems (collectively referred to as operators). Most companies will qualify either as a deployer, when using AI systems in a professional capacity under their own authority, or as a provider, when developing AI systems from scratch or on the basis of an existing AI model. Both public and private actors fall under the legislative framework.

Generally speaking, and subject to some exceptions, the AI Act applies to all AI systems that are placed on the EU market or put into service in the EU. Given its extra-territorial scope, companies based outside the EU are also affected if they place their AI system on the EU market or put it into service there. Furthermore, even providers and deployers established or located in a third country (outside the EU) can fall within the scope of the AI Act if the output produced by an AI system is (intended to be) used in the EU.

The AI Act adopts a risk-based approach and divides AI systems into four different risk categories. Depending on the assigned level of inherent risk that the AI system poses to health, safety and/or fundamental rights, different obligations come into play for the different actors in the AI value chain:

  1. The first category covers AI systems considered to pose an unacceptable risk. Examples include AI systems used for emotion recognition in the workplace and those employing ‘real-time’ remote biometric identification in publicly accessible areas for law enforcement purposes.
  2. The second category relates to high-risk AI systems, to which the most stringent obligations logically apply. First, it covers AI systems that are safety components of products covered by sectoral EU product safety legislation and that are required to undergo a third-party conformity assessment; this includes, among others, the legislation on machinery, medical devices, toy safety and vehicles. Secondly, it covers AI systems used in specific sectors and for specific use cases. Examples include AI systems intended to be used for the management and operation of critical infrastructure, such as the operation of road traffic or the supply of water, gas or heating. Other examples are AI systems intended to be used for the recruitment or selection of natural persons, or for making decisions on the promotion or termination of work-related contractual relationships. Providers of high-risk AI systems are, amongst other obligations, required to register the AI system in an EU database, maintain a quality management system, provide adequate technical documentation, and enable effective human oversight.
  3. The third category concerns AI systems with a specific transparency risk; the classic examples are chatbots and deep fakes. The AI Act imposes specific transparency requirements to avoid the risk of deception. When a company provides a chatbot on its website, for example, it must be clear to the individual that they are interacting with an AI system and not a natural person; similarly, when a company uses deep fakes in its marketing, it must be clear that the images were generated by an AI system.
  4. The last category concerns AI systems with a minimal risk. These AI systems are not explicitly regulated by the AI Act, but transparency is nevertheless encouraged. Examples are recommender systems, spam filters, etc.

A separate set of obligations applies to general-purpose AI models (GPAI models). These are AI models that display significant generality and are capable of competently performing a wide range of distinct tasks, regardless of the way the models are placed on the market, and that can be integrated into a variety of downstream systems or applications. Companies developing GPAI models must maintain comprehensive documentation of their development and testing processes. They are also required to share relevant information with the companies that build on their models, while safeguarding intellectual property rights.

The AI Act implements a sanction and penalty mechanism that closely resembles the one introduced by the GDPR. However, it is important to note that the (maximum) penalties under the AI Act are higher than those under the GDPR and, in certain cases, when the AI system also processes personal data, can be accumulated with them. Non-compliance with the prohibitions on AI practices posing an unacceptable risk is subject to a fine of up to 35 million EUR or 7% of a company’s total worldwide annual turnover of the preceding financial year, whichever is higher. Other violations can result in fines of up to 15 million EUR or 3% of worldwide annual turnover, and providing incorrect or misleading information to authorities can result in fines of up to 7.5 million EUR or 1% of worldwide annual turnover. Importantly, SMEs are subject to lower fines, taking into account their size, interests and economic viability.
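
For illustration only, the short Python sketch below (the function and category names are hypothetical, not taken from the AI Act) shows how the “whichever is higher” cap plays out arithmetically for the three penalty tiers mentioned above; it is not an official calculation tool and does not replace a legal analysis of the applicable penalty regime.

```python
def max_fine_eur(violation: str, worldwide_turnover_eur: float) -> float:
    """Illustrative sketch of the AI Act's maximum-fine logic: the cap is the
    higher of a fixed amount and a percentage of the company's total worldwide
    annual turnover for the preceding financial year. Lower caps apply to SMEs."""
    caps = {
        "prohibited_practice": (35_000_000, 0.07),    # unacceptable-risk violations
        "other_violation": (15_000_000, 0.03),        # most other violations
        "incorrect_information": (7_500_000, 0.01),   # misleading info to authorities
    }
    fixed_amount, turnover_share = caps[violation]
    return max(fixed_amount, turnover_share * worldwide_turnover_eur)

# Example: a company with 2 billion EUR worldwide turnover that breaches a
# prohibition faces a cap of max(35m, 7% of 2bn) = 140 million EUR.
print(max_fine_eur("prohibited_practice", 2_000_000_000))
```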

Under the phased implementation of the AI Act, a first milestone occurs on 2 February 2025, by which date AI systems posing an unacceptable risk must be banned from the EU market. The following dates are also important to take into account:

  • by 2 August 2025, the obligations and penalties related to GPAI models become applicable;
  • by 2 August 2026, most remaining rules of the AI Act become applicable, including the obligations for high-risk AI systems used in specific sectors for specific use cases;
  • by 2 August 2027, the obligations for high-risk AI systems that are safety components of products or are required to undergo a conformity assessment come into effect.

Given the horizontal and cross-sectoral nature of the AI Act, it is crucial to interpret it in conjunction with other legal frameworks, such as the GDPR, regulations on intellectual property rights, the NIS-II Directive, the Cyber Resilience Act, the Digital Markets Act, and the Digital Services Act, among others.

KPMG Law can assist in evaluating the risk level of the AI systems you develop or deploy, your role in the AI value chain and the corresponding obligations and responsibilities under the AI Act, and can guide you on the interplay with the various other legal frameworks that come into play.