Until now, organizations' maturity in building AI applications has existed on a continuum. In this blog, we argue that the proposed AI Act will introduce a maturity threshold, dividing the field into two groups: organizations that can comply with the regulation and continue to develop AI systems themselves, and those that cannot. We point out the elements of the AI Act proposal that are most obviously problematic given common practice in low-maturity organizations.

What is the Artificial Intelligence Act?

In recent years, Artificial Intelligence (AI) has benefited from colossal investment – reaching the $500 billion mark worldwide in 2023[1] – which has led to major breakthroughs and the development of foundation models such as the ones underpinning ChatGPT. The potential contribution of AI to the global economy is estimated to reach $15.7 trillion in 2030.[2] The ever-increasing use of AI in diverse areas such as healthcare, financial services, and retail has underlined the need to control potential risks and abuses, leading to the development of AI-specific legislative and regulatory frameworks. With its Artificial Intelligence Act (AI Act), the European Union aims to be a front-runner in this regard.

The proposal for the AI Act[3] defines AI systems as "software that can generate results that influence its environment, and that is created by machine-learning, logic- and knowledge-based, or statistical approaches." Given this very wide definition (which is still expected to change), the categorization of AI systems according to their risk level is vital, as is the application of requirements that differ by risk level. It seems that many of the impactful systems will be 'high risk' (i.e., representing a risk to the health and safety or fundamental rights of natural persons), which will trigger a slew of requirements. These are the ones we focus on below.

Pitfalls organizations might struggle to avoid

While there is no final regulation yet, some of the legislator's intentions are already clear. In this blog, we connect those intentions regarding high-risk AI systems to some of the most common struggles faced by AI teams in practice. Our observations are based on direct experience from KPMG's practice of auditing, validating, and building AI systems and teams. By connecting the two, we believe it is already possible to identify the operational issues that are most likely to cause headaches (or perhaps even fines) under the upcoming legislation.


Once this new European regulation comes into force, proper data governance principles must be defined for the data sets used in the development of AI solutions. This applies to data set design, data collection, data preparation (cleaning, enrichment, labelling), and data quality, especially regarding potential bias. Formal data governance has been a big topic in the data community, but many companies have not fully organized themselves for it yet. Furthermore, in a first phase of experimenting with AI, companies tend to pay little attention to it, creating an informal culture that may persist well into the future and lead to later non-compliance with the intentions of the AI Act. Currently, when we audit data science teams, insufficient formal governance is one of the most common observations.
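To make this tangible, the sketch below (in Python, using pandas) shows the kind of small, repeatable data-quality and representation check that a formal data governance policy could require before a data set is used for training. It is a minimal illustration, not a complete governance framework; the column names (`label`, `gender`) and the 5% missing-value threshold are purely hypothetical assumptions.

```python
import pandas as pd

def basic_data_quality_report(df: pd.DataFrame,
                              label_col: str = "label",
                              sensitive_col: str = "gender",
                              max_missing_ratio: float = 0.05) -> dict:
    """Minimal, repeatable checks that a data governance policy could formalize."""
    report = {}

    # Completeness: share of missing values per column.
    missing = df.isna().mean()
    report["columns_above_missing_threshold"] = missing[missing > max_missing_ratio].to_dict()

    # Uniqueness: duplicated rows can silently inflate apparent performance.
    report["duplicate_rows"] = int(df.duplicated().sum())

    # Representation: positive-label rate per sensitive group, as a first
    # indication of potential bias in the labelled data.
    rates = df.groupby(sensitive_col)[label_col].mean()
    report["positive_rate_per_group"] = rates.to_dict()
    report["max_group_rate_gap"] = float(rates.max() - rates.min())

    return report
```

A report like this would typically be generated and archived at every data refresh, so that the checks are traceable rather than left to individual data scientists.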



The regulation aims to establish a minimum of best practice and emphasizes the use of appropriate testing procedures: based on train-test splitting, with formalized metrics and probabilistic thresholds, to ensure that AI solutions achieve their intended purpose and comply with best practice during the development phase. In our experience of validating AI solutions, creating a sound train-test split in realistic settings is difficult even for expert data scientists (e.g., for time series or hierarchically organized data), and data scientists also regularly fall short of benchmarking their solution against practically relevant metrics.
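As an illustration, here is a minimal Python sketch of what such a formalized testing procedure could look like for time-ordered data: a time-series-aware split instead of a random one, an explicit metric, and an agreed acceptance threshold. The synthetic data, the model choice, and the 0.8 threshold are assumptions made for the example, not requirements taken from the Act.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import TimeSeriesSplit

# Illustrative, synthetic time-ordered data: 500 observations, 5 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = 2 * X[:, 0] + rng.normal(scale=0.5, size=500)

# A random split would leak future information into the training data;
# TimeSeriesSplit keeps every test fold strictly after its training fold.
scores = []
for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
    model = GradientBoostingRegressor().fit(X[train_idx], y[train_idx])
    scores.append(mean_absolute_error(y[test_idx], model.predict(X[test_idx])))

# Formalized acceptance criterion; the threshold itself is a hypothetical
# value that would normally be agreed with the business and documented.
MAE_THRESHOLD = 0.8
mean_mae = float(np.mean(scores))
print(f"Mean MAE over folds: {mean_mae:.3f} "
      f"({'meets' if mean_mae < MAE_THRESHOLD else 'fails'} the agreed threshold)")
```

The key point is not the specific metric, but that the split respects the structure of the data and that the acceptance criterion is written down before the evaluation is run.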



Expanding on the previous struggle, the proposal requires the establishment, implementation, and maintenance of a risk management system consisting of the iterative identification and analysis of known and foreseeable risks arising from both intended use and misuse, together with pre-tested mitigation measures to cope with the identified risks. For organizations just starting with AI, this can again be problematic: creating and using good risk frameworks requires maturity, while immature risk frameworks can be ineffective and even smother development and innovation. While KPMG typically employs very rich risk frameworks integrated in a tight governance structure, in our experience many organizations lack even a limited set of essential risk management reflexes.



The proposal also looks beyond the phase of creating AI systems and requires AI providers to set up, implement, and maintain a post-market monitoring system, in accordance with the nature of the technologies and risks of their AI systems. This system must collect and analyze relevant information provided by users throughout the AI system's lifetime, to assess its continuous compliance with the regulation. This addresses the long-term risks of AI and is another problem that many companies currently fail to address adequately. Even monitoring the value of an AI system (which is necessary considering data and concept drift, as well as model degradation) is often done insufficiently, let alone monitoring its risks. Among the many root causes we encounter, one of the most common is that organizations fail to re-organize their activities around their AI solutions. This often leads to two disconnected governance approaches: one in which expert data scientists create AI systems, and one in which expert business users follow up – a separation that leads to both less effective solutions and ineffective post-deployment monitoring.
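As one small building block of such monitoring, the sketch below computes a Population Stability Index (PSI) for a single input feature, comparing live data against the training distribution. The 0.2 alert level is a common rule of thumb used here as an illustrative assumption, not a regulatory threshold, and a real monitoring system would cover many more signals (performance, incidents, user feedback).

```python
import numpy as np

def population_stability_index(reference: np.ndarray,
                               current: np.ndarray,
                               n_bins: int = 10) -> float:
    """PSI between a reference (training) sample and a current (live) sample."""
    # Bin edges are derived from the reference distribution.
    edges = np.quantile(reference, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values

    ref_frac = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_frac = np.histogram(current, bins=edges)[0] / len(current)

    # A small floor avoids log-of-zero for empty bins.
    eps = 1e-6
    ref_frac = np.clip(ref_frac, eps, None)
    cur_frac = np.clip(cur_frac, eps, None)
    return float(np.sum((cur_frac - ref_frac) * np.log(cur_frac / ref_frac)))

# Hypothetical usage: flag a feature whose live distribution has shifted.
train_feature = np.random.default_rng(1).normal(0.0, 1.0, 10_000)
live_feature = np.random.default_rng(2).normal(0.5, 1.0, 2_000)
psi = population_stability_index(train_feature, live_feature)
if psi > 0.2:  # rule-of-thumb alert level, not a regulatory requirement
    print(f"PSI {psi:.2f}: significant input drift, trigger a review")
```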



Finally, we want to draw attention to the obligation for producers of AI systems to provide transparency by drawing up and keeping up to date scrupulous technical documentation. This includes a general description of the system's intended use, its interactions with external software and hardware, how it is developed, the risks involved, its EU declaration of conformity, and its post-market monitoring and risk management systems. Documentation of this breadth is not yet common practice and requires collaboration between the roles involved in different phases of the AI lifecycle, or at least an overarching governance, which in turn requires organizational maturity. We often see that companies lack this maturity, leading to a lack of documentation standards.
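As a modest starting point, some teams keep this documentation in a machine-readable form that is versioned alongside the model itself. The skeleton below is our own illustration of such a structure, loosely following the elements listed in the proposal; the field names and example values are hypothetical, not the official template from the Act's annexes.

```python
# Illustrative, machine-readable skeleton for technical documentation.
# All field names and values are hypothetical examples.
technical_documentation = {
    "system_name": "credit-scoring-model",
    "version": "1.4.0",
    "intended_purpose": "Support credit decisions for consumer loans.",
    "external_interfaces": ["core banking API", "customer data warehouse"],
    "development": {
        "training_data": "internal loan applications, 2018-2022",
        "evaluation": "time-based split, agreed metrics and thresholds",
    },
    "risks_and_mitigations": [
        {"risk": "bias against a protected group",
         "mitigation": "group-wise performance checks before each release"},
    ],
    "eu_declaration_of_conformity": "reference to be added",
    "post_market_monitoring": "monthly drift report and incident log",
    "last_updated": "2023-01-01",
}
```

Keeping such a record under version control makes it easier to prove that the documentation was actually kept up to date as the system evolved.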


Impact on AI development landscape

The common deficiencies in current practice, combined with the above requirements, are real sources of concern. They are root causes of observed problems with AI systems, and in this respect the upcoming AI Act seems to hit the bullseye: much like with the GDPR in the past, the EU has done a decent job of creating a forward-looking regulation that leaves room for innovation. Nevertheless, by defining a clear playing field, the challenge to organizations is also clear. In the future, creating impactful AI systems will require maturity and risk-awareness. This will put some types of organizations at a disadvantage: early-stage companies often lack an emphasis on governance, and small-scale experimentation will either remain limited in scope (e.g., to proofs of concept) or require organizations to move beyond technical experimentation and immediately tackle a broad range of organizational developments. Also, while financial institutions have a strong historic reflex to organize their activities around risks, other types of organizations have had less need to develop this and may need to evolve.

All in all, we believe that the AI Act will address important risks effectively, but considering the severe fines (currently up to 30 million euro or 6% of annual turnover), it may end up creating a division between organizations: those that are mature enough to develop high-risk AI systems, and those that are not and will no longer be able to do so.


Author: Harold Tellier