
Setting the ground rules: the EU AI Act

Understanding the regulatory landscape and preparing for the AI future


May 2024

A new framework

On 13 March 2024, the European Parliament formally approved the EU AI Act, the first comprehensive artificial intelligence (AI) legislation passed by any major jurisdiction in the world.

As a recent KPMG paper outlined, the AI Act’s focus is on protecting safety and fundamental rights. It introduces a tiered system of regulatory requirements for different AI applications, based on their level of risk. While many AI systems will be left essentially unregulated, those considered high risk will be subject to stringent safeguards — and those deemed contrary to European values will be largely prohibited.

For banks, the most significant element of the Act is the designation of AI credit scoring systems as high risk, on account of the potential for unfair discrimination against individuals or groups. (An analogous provision classes AI systems for pricing health or life insurance policies as high risk too.) Such AI systems must meet high standards of robustness and accuracy, must operate within a strong risk management framework, and must be designed to ensure human oversight and proper understanding of their outputs. These requirements will apply to new systems deployed from two years after the AI Act takes effect.

AI supervision and compliance

The AI Act recognises that banks and their credit models are already heavily regulated. So banks can satisfy many of its obligations by complying with existing regulatory requirements on model risk management and governance.

Supervision, however, may be complicated by the multi-faceted institutional architecture the AI Act establishes. For most industries, AI oversight (including checking that providers of high-risk AI systems have obtained the necessary safety certification before deployment) will be the responsibility of new national AI authorities. For financial services firms, by contrast, European Union (EU) countries can allocate this task either to their national AI authority or to existing national financial supervisors. Meanwhile, the European Central Bank (ECB), supervisor of Europe’s significant institutions, has no role in supervising AI Act requirements, but will continue to scrutinise credit models from a prudential perspective.

This complex regulatory architecture raises the possibility that banks using AI-powered credit models will find the same models being supervised by multiple national and European bodies, with potentially very different cultures and core expertise. The various authorities involved should therefore coordinate their activities effectively to avoid imposing duplicative or even contradictory requirements on firms, and to ensure a consistent supervisory approach to AI across Europe.

Challenging the model of model risk management

The advent of AI may also require a more general shift in approaches to model risk management and supervision. Hitherto, the primary focus has typically been on ensuring model soundness ex-ante, via careful model design, validation and backtesting. Once risk controllers and supervisors have been satisfied with a model’s robustness, first-line staff have generally been able to rely heavily on its outputs in making decisions such as loan approvals.

AI models’ much greater complexity and capacity for self-engineering, however, may challenge this approach. A system that employs vast datasets and chooses its own parameters may be much more difficult to validate, while the value of ex-ante approval will decline as a model learns and adjusts the statistical relationships it uses to produce its outputs. These features of AI may require greater emphasis on ex-post risk management, to ensure banks can properly interpret, and where necessary challenge, their AI systems’ outputs before using them to make business decisions.

The AI Act recognises this dynamic, both in its requirement that high-risk models be designed and documented so that users can properly interpret their outputs, and in its ‘AI literacy’ requirement for firms to ensure their staff have sufficient expertise to use AI systems appropriately. At the supervisory level, Claudia Buch, Chair of the Supervisory Board of the ECB, similarly said in a recent interview that the ECB expects banks to demonstrate that they do not just ‘blindly’ follow AI systems’ recommendations when making decisions.

Implications for banks

The adoption of the AI Act is a key milestone in Europe’s embrace of a technology that will have a profound, perhaps revolutionary, impact on our economy and society. The AI Act sets the ground rules for developing and deploying AI solutions that deliver higher quality services and greater efficiency to customers across industries, including financial services. Now that the Act is in place, banks should take three key steps to prepare for the AI future:

1. AI strategy
Banks should develop a comprehensive AI strategy, identifying priority use-cases and the financial and human resources needed for successful implementation. This will help position banks to derive the greatest benefit from the new technology.

2. Governance framework
Banks should establish a coherent governance framework to help ensure their AI systems are trustworthy, free from bias and explainable to both regulators and customers. The framework should also provide for appropriate human oversight of, and clear responsibility for, AI-assisted decisions. Strong AI governance is important not only for meeting regulatory requirements, but also for maintaining public acceptance.

3. Regulatory engagement
Banks should proactively discuss their AI implementation plans with regulators and supervisors. This will both inform financial and AI authorities of the practical implications of AI adoption and help ensure banks fully understand regulatory and supervisory expectations.

In assessing the implications of AI, banks should remember to consider not only the new AI regulators but also their current financial supervisors, including the ECB. In a recent article, ECB Supervisory Board member Elizabeth McCaul wrote that supervisors would not “dictate” what technologies banks use, but emphasised that the ECB’s core task is “to ensure banks remain safe and sound.” The ECB will continue working to understand both the opportunities and the risks of AI, and it will expect banks to manage those risks appropriately. As they explore the potential of AI in their business, banks should closely monitor how the ECB’s thinking develops once the AI Act is in force.

Quarterly KPMG SSM Insights Newsletter – May edition

Welcome to KPMG’s first SSM Insights Newsletter of 2024. This year will see the SSM celebrate its 10th anniversary. It was in November 2014 that the ECB took over direct supervision of the euro area’s significant institutions, marking the establishment of the first pillar of the banking union.

Related Content

Decoding the EU Artificial Intelligence Act

Understanding the AI Act’s impact and how you can respond.

KPMG European Central Bank Office - Advisory Services

The KPMG ECB Office offers information and solutions for dealing with the ECB’s supervisory approach under the Single Supervisory Mechanism (SSM).

Revised ECB Guide to internal models

Three key impacts for banks and what to expect going forward


Our people

Benedict Wagner-Rundell

Senior Manager

KPMG in Germany

Matthias Peter

Partner, Financial Services

KPMG in Germany


Connect with us

KPMG combines a multi-disciplinary approach with deep, practical industry knowledge to help clients meet challenges and respond to opportunities. Connect with our team to start the conversation.
