What is the AI Act?

In March 2024, the European Parliament formally adopted the first horizontal, standalone legal framework on Artificial Intelligence, known as the Artificial Intelligence Act (“AI Act”).

While the EU has been cautious about the implementation of Artificial Intelligence (AI) in some respects, the European Parliament has now agreed upon an extensive framework that will regulate both the development and use of AI.

The AI Act aims to strike a balance between fostering AI adoption across member states and safeguarding individuals’ rights, including human rights and the right to responsible, trusted, and ethical AI.

What are the risk categories?

The AI Act divides AI systems into three risk classes: “unacceptable,” “high,” and “low/minimal.”

Placing on the market, putting into service, or using AI systems that pose an unacceptable risk is prohibited. These include AI systems designed to subliminally influence human behavior adversely and those that exploit the weaknesses of vulnerable individuals.

Additionally, the use of AI systems by public authorities to assess or classify the trustworthiness of natural persons (“social scoring”) is prohibited. AI systems may not, in principle, be used for real-time remote biometric identification of natural persons in publicly accessible spaces for law enforcement purposes.

AI systems that pose a high risk to the health and safety or fundamental rights of natural persons are referred to as “high-risk AI systems.” These fundamental rights include human dignity, respect for private and family life, protection of personal data, freedom of expression and information, and freedom of assembly and association.

AI systems that are neither prohibited as posing an unacceptable risk nor classified as high-risk fall into the low/minimal risk category. These systems are subject to less stringent requirements. However, providers of such systems are still encouraged to establish codes of conduct and to voluntarily apply the requirements for high-risk AI systems. Additionally, the AI Act requires that even low-risk AI systems must be safe when they are placed on the market or put into service.

Given the broad definition of AI in the AI Act, it is expected that most AI systems will need to be compliant once the law comes into force.
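As a rough illustration of how an organisation might encode this taxonomy internally, the Python sketch below models the three risk classes as a simple enumeration. The screening helper and its two boolean inputs are hypothetical simplifications for illustration only; actual classification requires legal analysis against Articles 5 and 6 of the Act and its annexes.

```python
from enum import Enum


class RiskClass(Enum):
    """The three risk classes described above."""
    UNACCEPTABLE = "unacceptable"   # prohibited practices (Article 5)
    HIGH = "high"                   # high-risk systems (Article 6)
    LOW_MINIMAL = "low/minimal"     # everything else


def screen(prohibited_practice: bool, high_risk_use_case: bool) -> RiskClass:
    """Hypothetical first-pass screen. A real classification requires
    legal analysis of the Act, not two boolean flags."""
    if prohibited_practice:
        return RiskClass.UNACCEPTABLE
    if high_risk_use_case:
        return RiskClass.HIGH
    return RiskClass.LOW_MINIMAL


# Example: a CV-screening tool used in recruitment would typically be high-risk.
print(screen(prohibited_practice=False, high_risk_use_case=True))  # RiskClass.HIGH
```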

Who is affected?

The AI Act has extensive legal reach and a significant impact on those involved. It will affect both providers (developers) and deployers (users) of AI systems. Employers, considered deployers of AI, will also be subject to the obligations outlined in the Act when implementing AI systems in the workplace.

Additionally, national testing regimes (known as regulatory sandboxes) will be established, along with codes of practice to ensure proper compliance with the regulations applicable to general-purpose AI (GPAI) systems.

Existing Union laws, as well as Union sectoral legislative acts concerning product safety, will remain applicable. Compliance with the AI Act will not exempt organisations from their pre-existing legal obligations in these areas.

When does it enter into force?

The AI Act will be implemented incrementally. Rather than taking effect on a single date, its various regulations and controls will apply in stages.

The regulation is expected to be published in the Official Journal of the EU between May 2024 and July 2024, and it will enter into force 20 days after publication. Its provisions will then become applicable in stages, from late 2024 through the summer of 2027. Over the coming years, the EU will ensure the development and adoption of secondary legislation and guidelines that organisations and public authorities in the member states must implement.

AI System definition

An AI system is a machine-based system designed to operate with varying levels of autonomy. It may exhibit adaptiveness after deployment and, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

What happens now?

Following the adoption of the AI Act, several steps will take place before the Act becomes fully applicable. The most vital steps are highlighted below, followed by a short sketch that translates the schedule into concrete dates.

Within six months of entry into force
AI systems posing an unacceptable risk will be prohibited. Examples of such systems are those which deploy subliminal techniques beyond a person’s consciousness, or purposefully manipulative or deceptive techniques, with the objective or effect of materially distorting behavior (Article 5).

Within nine months of entry into force
Codes of practice for general-purpose AI must be finalised, a milestone of particular relevance to the industries involved. These codes call for clear, concise, and transparent documentation outlining comprehensive information on risk management, training protocols, and the technical architecture of AI systems.

Within twelve months of entry into force
The GPAI rules will apply, and Member States must lay down procedures for administrative fines and designate competent authorities that collaborate with the EU AI Office. The AI Office will conduct annual reviews of the functioning of the AI Act, including possible amendments to the list of prohibited practices.

Within 18 months of entry into force
The EU intends to provide guidance detailing the classification of high-risk AI systems as specified in Article 6.

Within 24 months of entry into force
Member States must have established at least one national AI regulatory sandbox.

Within 36 months of entry into force
Obligations concerning high-risk AI systems will apply, particularly in the areas of biometrics, critical infrastructure, education, access to essential public services, law enforcement, immigration, and the administration of justice. Additionally, the EU will conduct reviews and consider amendments to the list of high-risk AI systems.
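To make the staggered schedule concrete, the sketch below computes each milestone from a hypothetical entry-into-force date. The date of 1 August 2024 is an assumption for illustration only; the actual date depends on publication in the Official Journal.

```python
from datetime import date


def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day 1 never needs clamping)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)


# Hypothetical entry-into-force date, assuming publication in mid-2024.
entry_into_force = date(2024, 8, 1)

milestones = {
    "Prohibitions on unacceptable-risk AI apply": 6,
    "Codes of practice for GPAI finalised": 9,
    "GPAI rules apply": 12,
    "Guidance on high-risk classification": 18,
    "National regulatory sandboxes in place": 24,
    "High-risk AI system obligations apply": 36,
}

for label, offset in milestones.items():
    print(f"{add_months(entry_into_force, offset):%d %b %Y}: {label}")
```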

What should organisations do?

AI is now widespread across industries, but early adoption has been most visible in sectors such as healthcare, financial services, and retail.

Nevertheless, it is advisable for all organisations, regardless of industry, to develop a comprehensive inventory of the AI systems they develop and deploy. These systems should then be categorised based on the risk levels defined in the AI Act. If any systems fall into the high or unacceptable risk categories, organisations must assess the implications of the AI Act for their operations. Promptly grasping this impact and determining an appropriate response is crucial.
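By way of illustration, a first-pass inventory could start as simply as the sketch below. The record fields and example entries are hypothetical; in practice, the risk labels would come from a legal assessment against the Act.

```python
from dataclasses import dataclass


@dataclass
class AISystemRecord:
    """One entry in an organisation's AI inventory (illustrative fields only)."""
    name: str
    role: str          # "provider" (developer) or "deployer" (user)
    purpose: str
    risk_class: str    # e.g. "unacceptable", "high", "low/minimal"


# Hypothetical inventory entries for illustration.
inventory = [
    AISystemRecord("cv-screening", "deployer", "shortlisting job applicants", "high"),
    AISystemRecord("spam-filter", "deployer", "filtering inbound email", "low/minimal"),
]

# Surface the systems that warrant an AI Act impact assessment first.
needs_assessment = [r for r in inventory if r.risk_class in ("unacceptable", "high")]
for record in needs_assessment:
    print(f"{record.name}: assess AI Act obligations as {record.role}")
```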

For more information on addressing these challenges, organisations can contact KPMG or click below for insights on navigating the AI Act.  

Contact us