Artificial Intelligence (AI) is revolutionizing the way organizations operate. It has the potential to streamline internal processes, increase operational efficiency, and uncover patterns that would otherwise remain hidden. As a result, it can deliver unprecedented economic, environmental, and societal benefits across the entire spectrum of industries. At the same time, however, AI may generate risks and cause harm to public interests and fundamental rights. A recent example is the ‘Toeslagenaffaire’, the Dutch childcare benefits scandal in which the tax authority wrongly accused thousands of parents of fraudulent claims for child benefits based on a risk assessment performed with the use of AI.

To avoid such events in the future, and to promote the uptake of human-centric and trustworthy AI, the European Union recently adopted the EU AI Act. The Act introduces a legal framework for the safe and responsible use of AI, while at the same time fostering innovation and economic prosperity.

In this blog post, we highlight the most important aspects of the EU AI Act and provide strategic insights to help you navigate it and efficiently address its intersection with overlapping laws and regulations.

The EU AI Act: A Closer Look

Applicability

The EU AI Act applies to various actors (‘operators’) in the AI value chain: developers of AI systems and models (‘providers’), including parties that fine-tune underlying AI models; users of AI systems (‘deployers’); importers and distributors of AI systems and models; and product manufacturers that integrate AI systems into their products. It covers AI systems that are placed on the market or put into service in the EU, deployers of AI systems located within the EU, and providers and deployers established outside the EU where the output produced by the AI system is used in the EU.

The EU AI Act is sector-agnostic and applies to both the private and the public sector. Certain AI systems and certain AI use cases are exempted from the EU AI Act, such as AI systems used for scientific research and development, or AI systems exclusively used for military purposes. Most AI released under a free and open-source license is also largely exempted from the EU AI Act.

Risk-based approach

To ensure proportionate and effective regulation of AI, the EU AI Act uses a risk-based approach whereby AI systems are categorized based on the risk they pose to people’s health, safety, and fundamental rights. The greater the risk, the more stringent the obligations. The EU AI Act distinguishes between the following risk categories (a simplified illustration in code follows the list):

  • Prohibited AI practices: AI practices that are so harmful and abusive that they are forbidden under all circumstances. Examples are behavior manipulation through subliminal techniques, exploitation of people’s vulnerable characteristics, and social scoring practices.
  • High-risk AI systems: AI systems that pose a significant risk to the health, safety, and fundamental rights of people. The majority of the EU AI Act obligations apply to high-risk AI systems to ensure that they are reliable, technically robust, resilient, safe, explainable, non-discriminatory, fair, and accountable.
  • Limited-risk AI systems: AI systems intended to interact with natural persons or to generate content that may pose a risk of impersonation or deception. Limited-risk AI systems are subject to certain transparency requirements, such as the requirement to disclose to users that they are not interacting with a human being but with an AI system (e.g., a chatbot).
  • Low-risk AI systems: All remaining AI systems. The EU AI Act does not include any obligations for low-risk AI systems. Operators of such AI systems may, however, sign voluntary codes of conduct that regulate the use of such AI systems.
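
To make the tiering easier to picture, the sketch below encodes the four categories as a simple data structure. It is a purely illustrative mnemonic, not a legal classification tool: the example systems listed are commonly cited illustrations, not determinations under the Act.

```python
# A simplified, purely illustrative sketch of the EU AI Act's four risk
# tiers. Real classification requires a legal analysis of the system's
# intended purpose against the Act and its annexes.

from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited AI practice"   # banned outright
    HIGH = "high-risk AI system"            # bulk of the Act's obligations
    LIMITED = "limited-risk AI system"      # transparency requirements
    LOW = "low-risk AI system"              # no obligations; voluntary codes

# Commonly cited, indicative examples per tier (assumptions for
# illustration only, not classifications under the Act).
EXAMPLES = {
    RiskTier.PROHIBITED: ["social scoring", "subliminal manipulation"],
    RiskTier.HIGH: ["CV screening for recruitment", "credit scoring"],
    RiskTier.LIMITED: ["customer-facing chatbot", "AI-generated media"],
    RiskTier.LOW: ["spam filtering", "inventory forecasting"],
}

for tier, examples in EXAMPLES.items():
    print(f"{tier.value}: {', '.join(examples)}")
```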

In addition to categorizing AI systems and imposing obligations based on their level of risk, the EU AI Act imposes obligations on developers of so-called ‘general-purpose AI models’: AI models that underlie AI systems capable of performing many different tasks. Well-known examples of such models are OpenAI’s GPT models (underlying ChatGPT), Meta’s Llama, and Google’s Gemini.

Obligations for high-risk AI systems

Although the EU AI Act introduces obligations for several types of AI systems, it is mostly aimed at regulating ‘high-risk AI systems’. Organizations (operators) working with such AI systems must meet a series of stringent obligations that aim to ensure the AI system is developed and used in a safe and responsible way. Examples include:

  • the obligation (for developers of AI systems) to establish a risk management system with which, among other things, the risks of the AI system are identified and analyzed;
  • the obligation to use appropriate, unbiased, and representative data when developing AI systems;
  • the obligation to draw up and keep up to date technical documentation about the AI system; and
  • the obligation to ensure the AI system achieves an appropriate level of accuracy, robustness, and cybersecurity.

Users of AI systems must, for example, ensure appropriate human oversight when using high-risk AI systems and input relevant and representative data into the AI system.

Timeline

Although the EU AI Act entered into force on 1 August 2024, its obligations do not apply until the expiration of a so-called grace period, which gives organizations time to prepare for compliance. After this period expires, the EU AI Act starts to apply according to a staggered timeline (illustrated in code after the list):

  • 2 February 2025: Prohibitions on ‘prohibited AI practices’ start to apply.
  • 2 August 2025: Obligations for providers of ‘general-purpose AI models’ start to apply.
  • 2 August 2026: Obligations for certain types of high-risk AI systems (i.e., AI systems that qualify as ‘high-risk AI systems’ based on their intended purpose), as well as the transparency obligations for ‘limited-risk AI systems’, start to apply.
  • 2 August 2027: Obligations for the remaining types of high-risk AI systems (i.e., AI systems that are (safety components of) products regulated under European product legislation) start to apply.
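
For readers who track these dates programmatically, the sketch below encodes the milestones as data and looks up which obligation sets already apply on a given date. It is an illustrative aid reflecting our reading of the timeline above, not legal advice.

```python
# An illustrative helper that maps a date to the EU AI Act obligation
# sets already applicable on that date, per the staggered timeline above.

from datetime import date

# (application date, obligation set), in chronological order.
MILESTONES = [
    (date(2025, 2, 2), "prohibitions on 'prohibited AI practices'"),
    (date(2025, 8, 2), "obligations for general-purpose AI models"),
    (date(2026, 8, 2), "obligations for high-risk AI systems (intended "
                       "purpose) and transparency rules for limited-risk AI"),
    (date(2027, 8, 2), "obligations for high-risk AI systems that are "
                       "(safety components of) regulated products"),
]

def applicable_obligations(on: date) -> list[str]:
    """Return the obligation sets that apply on the given date."""
    return [label for start, label in MILESTONES if on >= start]

# Example: which obligation sets apply on 1 September 2026?
print(applicable_obligations(date(2026, 9, 1)))
```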

Penalties

Penalties for non-compliance range from 1% of a company’s annual global turnover (or €7 500 000, whichever is higher) for supplying incorrect or incomplete information to authorities, to 7% of annual global turnover (or €35 000 000, whichever is higher) for engaging in prohibited AI practices. Start-ups and small and medium-sized enterprises are subject to proportional administrative penalties, capped at whichever of the two amounts is lower.
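
To make the penalty arithmetic concrete, the sketch below computes the statutory ceilings: for most companies the maximum is whichever is higher of the turnover percentage and the fixed amount, while for SMEs and start-ups (on our reading of the Act) it is whichever is lower. The figures are maximum fines, not amounts that would actually be imposed.

```python
# A minimal, illustrative sketch of the EU AI Act's penalty ceilings.
# Figures are statutory maximums; actual fines are set case by case.

def max_fine(annual_global_turnover: float, pct: float, fixed_cap: float,
             is_sme: bool = False) -> float:
    """Return the maximum possible fine in euros.

    For most companies the ceiling is whichever is *higher* of the
    turnover percentage and the fixed amount; for SMEs and start-ups
    the Act caps the fine at whichever is *lower*.
    """
    turnover_based = annual_global_turnover * pct
    if is_sme:
        return min(turnover_based, fixed_cap)
    return max(turnover_based, fixed_cap)

# Example: prohibited AI practice, company with €1 billion global turnover.
print(max_fine(1_000_000_000, 0.07, 35_000_000))  # €70,000,000 (7% > €35M)

# Example: supplying incorrect information, same company.
print(max_fine(1_000_000_000, 0.01, 7_500_000))   # €10,000,000 (1% > €7.5M)
```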

Strategic Insights for Organizations

Based on our knowledge and experience helping organizations implement and comply with the EU AI Act, we outline a set of key strategic insights for organizations that work with AI today or plan to do so in the future:

  • Risk Classification and Role: As mentioned, the EU AI Act uses a risk-based approach whereby the organization’s obligations depend on the risk classification of the AI system, as well as on the role the organization assumes. Therefore, before developing, procuring, or using an AI system, organizations should determine their role and classify the risk of the AI system.
  • Fine-Tuning General-Purpose AI Models: Organizations may fine-tune AI models from third parties, for example, to make the AI model more suitable for certain tasks or to make it smaller, faster, or more efficient. Before doing so, organizations should establish whether the relevant third-party AI model qualifies as a ‘general-purpose AI model’. If it does, the organization fine-tuning the AI model must meet certain obligations that apply to developers of general-purpose AI models, such as the obligation to put in place a policy on compliance with third-party copyrights.
  • Contracts and Liability: Organizations should ensure existing and future contracts appropriately address and allocate liability resulting from the use of AI. This is particularly important for AI that can generate content (Generative AI) as output generated by such AI systems may violate third-party intellectual property rights, while the party providing input to the AI system (prompts) has limited control over the output of the AI system.
  • Proprietary AI and Open-Source AI: In principle, the EU AI Act fully applies only to proprietary AI; it does not apply, or applies only partially, to most free and open-source AI. Therefore, before developing, procuring, or using an AI system, the organization should determine whether the AI in question qualifies as ‘free and open-source AI’ under the EU AI Act.
  • Intellectual Property (IP) Ownership: Organizations that use AI systems may generate content that they want to own and that is eligible for IP protection. To secure this, organizations should include contract clauses that allocate ownership of the AI system’s output to them rather than to the AI system supplier.
  • Holistic Approach: The EU AI Act contains obligations that also appear in other (European) laws and regulations, for example, obligations regarding cybersecurity and privacy. Organizations should identify those overlapping areas and leverage the work already done in the context of those other laws and regulations. Examples are Data Protection Impact Assessments (DPIAs), required in certain cases by the GDPR, and cybersecurity certifications required, among others, under the Cybersecurity Act.
  • Compliance Support Programs: Small organizations often struggle to navigate the complex web of AI regulations and to identify which laws apply to them, particularly when they lack extensive legal resources. To help small organizations understand their regulatory obligations and achieve compliance more efficiently, the European Commission offers informational packages and compliance support programs.

For more guidance on navigating AI regulations and implementing effective compliance strategies, KPMG offers expert consultancy services tailored to your needs. Contact us to learn how we can support your AI journey and regulatory compliance efforts.

* This blog was created in collaboration with AI, demonstrating the efficiency and capabilities of modern technology. A human was actively involved at every step of creation, ensuring relevance, accuracy, and ethical consideration. This collaboration highlights the potential of AI to enhance our work while emphasizing the critical role of human oversight in maintaining quality and integrity.

Contact