To better regulate artificial intelligence, the EU introduced the AI Act. KPMG's Benny Bogaerts and Laura Vanuytrecht explain the regulation and what companies need to consider.

For years, training technology based on patterns in large data sets was an unattainable dream for many scientists. The term ‘artificial intelligence’ was coined as early as 1956, but it took nearly seventy years for the technology to actually take the world by storm. Today, development is progressing so rapidly that the European Union is introducing regulations to set boundaries on what is and isn’t allowed.

“The AI Act is part of a broader wave of regulations impacting companies,” says Benny Bogaerts, Partner at KPMG Advisory. “It is the first legal framework for AI, through which the EU establishes how we can use it safely and responsibly, without violating fundamental rights or the decision-making rights of individuals. Those who fail to comply with the regulation risk fines of up to 35 million euros or 7 percent of global annual sales.”

The four categories

Artificial intelligence is a broad technology with a wide variety of applications. Rather than lumping every AI model together, the EU has defined four risk categories.

"The focus is primarily on the potential risk. Systems with an unacceptable risk are applications where the danger is so great that Europe will ban them from February 2025. Think of social credit scores or emotion recognition in the workplace," says Laura Vanuytrecht, Counsel at KPMG Law.

Next come high-risk systems, which can affect people’s health, safety, the environment, or fundamental rights. “The strictest obligations apply to this category. These include AI systems or use cases in specific sectors, such as software for critical infrastructure or AI systems used for recruitment,” Vanuytrecht explains. “For systems with limited risk, such as chatbots, a transparency obligation applies: companies must clearly indicate that users are interacting with AI.”

Finally, there is the largest category: low-risk systems, such as spam filters. “There are no binding rules for these, only guidelines. In addition, a distinction is made between companies that develop AI, the ‘providers,’ and companies that use AI, the ‘users.’ A business can fall into both categories simultaneously,” Vanuytrecht stresses.

A multidisciplinary approach

Transparency must be central, emphasizes Bogaerts. “Every company must clearly communicate which AI models it uses and what happens to the data. An AI model should never make decisions on its own, and privacy remains a priority. Moreover, each category within the AI Act comes with specific rules, and AI applications can shift from one category to another over time.”

The AI Act also does not stand alone; there is considerable cross-pollination with other legislation. “Because the AI Act is so broad, it must be approached from a multidisciplinary perspective,” says Bogaerts. “This means involving not only legal and compliance experts, but also specialists in data science. At KPMG, we have developed a framework to establish a strong governance structure.”

Bogaerts outlines some concrete steps that companies must take to implement the new regulation correctly. “First, you need to determine which use cases fall under ‘unacceptable’ or ‘high-risk’. Next, it is essential to evaluate and categorize the risks, including those involving third parties such as suppliers, who will need to demonstrate, for example, that they are actively working on safety. Finally, awareness around AI is crucial, both internally with employees and externally with partners and stakeholders.”

According to Bogaerts, the coming months will be crucial, as they will bring much clarity. “With other European legislation, such as the GDPR, there was less room for interpretation. There are still many questions about the AI Act, but soon the impact of the new regulation will become increasingly concrete.”


This article was created in collaboration with De Tijd and L'Echo.