Diverging regulatory approaches for AI

Implications for financial services firms


As interest intensifies around Artificial Intelligence (AI) use cases (particularly those pertaining to generative AI), governments around the world are hastening to draft their regulatory responses. The European Parliament is currently fine-tuning proposals for the prescriptive AI Act, which categorises systems by risk and necessitates the creation of a bespoke European AI Board. The UK, on the other hand, is considering a more flexible, principles-based approach in which responsibility is delegated to existing regulators and nothing is codified in legislation. Although both frameworks are wide-ranging and cross-sectoral, they carry important implications for financial services specifically.

The EU approach

The EU AI Act was initially proposed by the European Commission in April 2021. Since then, a 'general approach' position has been adopted (in late 2022), and final rules are expected by the end of 2023 or early 2024. This would likely make it the first AI law enacted by a major jurisdiction.

Overall, the Act is relatively prescriptive — categorising systems by risk and prohibiting firms' use of AI in pre-defined "harmful" ways. More specifically, it classifies AI systems into four risk categories:

1) Unacceptable risk (e.g. government-run social scoring) — where these systems are entirely banned 

2) High-risk (e.g. CV scanning, autonomous vehicles) — where these systems are subject to specific legal requirements (e.g. testing, documentation, human oversight)

3) Limited risk (e.g. chatbots) — where these systems are subject to transparency obligations

4) Minimal or no risk — where these systems are largely unregulated 

The majority of the Act focuses on the 21 identified high-risk systems — "systems that can restrict an individual's financial and professional opportunities". Only three of these currently seem applicable to financial services, and only the first applies directly (a simple sketch of the classification follows the list): 

i. AI systems used to evaluate a person's creditworthiness, 

ii. AI systems used to monitor and evaluate work performance, and 

iii. AI systems used to recruit staff. 
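
The short Python sketch below illustrates one way a firm might triage an internal AI inventory against these tiers. The tier mapping, use-case names and obligation summaries are illustrative assumptions rather than the Act's wording; under the Act, the Commission's evolving Annex lists would be determinative.

  from enum import Enum

  class EUAIRiskTier(Enum):
      """The Act's four risk tiers, paired with a headline consequence."""
      UNACCEPTABLE = "banned outright"
      HIGH = "conformity assessment, registration and ongoing monitoring"
      LIMITED = "transparency obligations"
      MINIMAL = "largely unregulated"

  # Hypothetical mapping of financial services use cases to tiers, reflecting
  # the three high-risk examples cited above; real classifications would
  # follow the Act's annexes, not an internal lookup table.
  FS_USE_CASE_TIERS = {
      "creditworthiness evaluation": EUAIRiskTier.HIGH,  # applies to FS directly
      "work performance monitoring": EUAIRiskTier.HIGH,
      "staff recruitment screening": EUAIRiskTier.HIGH,
      "customer service chatbot": EUAIRiskTier.LIMITED,
      "internal spam filtering": EUAIRiskTier.MINIMAL,
  }

  def triage(use_case: str) -> str:
      """Return the headline obligation attached to a given AI use case."""
      tier = FS_USE_CASE_TIERS.get(use_case, EUAIRiskTier.MINIMAL)
      return f"{use_case}: {tier.name} -> {tier.value}"

  for case in FS_USE_CASE_TIERS:
      print(triage(case))

An inventory of this kind is only a starting point, but it forces the question each EU provision turns on: which tier does each system fall into, and what obligations follow?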

However, once the Act is in force, the Commission would be responsible for reviewing and amending the list of high-risk systems on an ongoing basis. 

Providers of these high-risk systems and the corporations that use them would have to comply with stringent requirements around risk management, data quality, technical documentation, human oversight, transparency, robustness, accuracy and security. 

In addition, providers would need to complete a conformity assessment against all applicable requirements before the AI systems are put on the market. Providers would then need to register the system (including the declaration of conformity) in the public EU database, which the Commission would set up. They would also need to maintain an ongoing monitoring system to address any risks that arise down the line.  

Requirements for users would include obligations to use the AI systems according to the providers' instructions, safeguard human oversight, ensure the relevance of the input data, report serious incidents to the AI providers, and keep logs of the AI systems' activities. 
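
Record-keeping is the most mechanical of these user obligations, so a sketch may help. The Python below shows a minimal, hypothetical structure for logging an AI system's activity; the field names are assumptions for illustration only, as the Act does not prescribe a log format.

  import json
  from dataclasses import dataclass, asdict
  from datetime import datetime, timezone

  @dataclass
  class AIActivityLogEntry:
      """One record per AI system invocation; fields are illustrative only."""
      system_id: str        # identifier of the AI system (e.g. as registered)
      timestamp: str        # when the system was invoked (UTC, ISO 8601)
      input_summary: str    # what went in, summarised rather than raw data
      output_summary: str   # what the system produced
      human_reviewer: str   # who exercised oversight, if anyone
      incident_flag: bool   # whether the run should be reported to the provider

  def log_activity(entry: AIActivityLogEntry, path: str = "ai_activity.log") -> None:
      """Append one JSON line per invocation to an audit file."""
      with open(path, "a", encoding="utf-8") as f:
          f.write(json.dumps(asdict(entry)) + "\n")

  log_activity(AIActivityLogEntry(
      system_id="credit-scoring-v2",
      timestamp=datetime.now(timezone.utc).isoformat(),
      input_summary="applicant financial profile",
      output_summary="score=642, application declined",
      human_reviewer="underwriter_017",
      incident_flag=False,
  ))

Append-only, structured logs of this kind also make the companion obligations (human oversight and incident reporting) easier to evidence after the fact.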

It would be up to individual Member States to designate one or more national competent authorities to enforce the regulation and set out rules on penalties. However, to oversee implementation and ensure uniform application across the EU, a new European AI Board would also be established. This Board would be tasked with issuing recommendations to the Commission on emerging issues and on the lists of prohibited and high-risk systems, as well as providing guidance to national authorities. 

Implications for generative AI

In the face of criticism that the Act insufficiently addressed "general purpose" AI or "foundation model" AI — i.e. systems that can be used for a range of purposes with varying degrees of risk — MEPs recently proposed some amendments. These include:

  • Making it mandatory to identify and mitigate reasonably foreseeable risks (with the support of independent experts)
  • Requiring providers to ensure their models achieve appropriate levels of performance, predictability, interpretability, safety and cybersecurity — especially as these models often serve as building blocks for other downstream systems 
  • Requiring providers to supply substantial documentation and usage instructions — to ensure downstream stakeholders are able to comply with any relevant regulatory requirements 

Moreover, generative foundation models — like ChatGPT — would also have to comply with additional transparency requirements (the first of which is illustrated in the sketch after this list), including: 

  • Disclosing that the content was generated by AI
  • Designing the model to prevent it from generating illegal content
  • Publishing summaries of copyrighted data used for training
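
The sketch below shows one possible approach to the first requirement: appending a disclosure label to generated text. The generate() function is a hypothetical stand-in for any text-generation call, not a real API.

  AI_DISCLOSURE = "[This content was generated by an AI system.]"

  def generate(prompt: str) -> str:
      # Placeholder for a real model call (e.g. an internal LLM service).
      return f"Draft response to: {prompt}"

  def generate_with_disclosure(prompt: str) -> str:
      """Return model output with an AI-content disclosure appended."""
      return f"{generate(prompt)}\n\n{AI_DISCLOSURE}"

  print(generate_with_disclosure("Summarise our credit policy for a customer."))

The other two requirements (preventing illegal content and summarising copyrighted training data) would sit with the model provider rather than the deploying firm, and cannot simply be bolted on at the point of use in this way.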

The UK approach

In comparison, the UK has so far opted for a much more flexible and principles-based approach — as laid out in its March 2023 White Paper.

Rather than targeting particular technologies or activities, the UK government proposes to adopt a context-specific approach where AI is regulated based on the outcomes it is likely to generate in certain applications. In other words, the government will build on existing regimes and empower regulators to apply a set of five cross-cutting principles to the existing regulatory framework:

1) Safety, security and robustness

2) Appropriate transparency and explainability

3) Fairness

4) Accountability and governance

5) Contestability and redress

In contrast to the EU, no dedicated regulator would be established to deal with AI. Instead, existing regulators (i.e. the FCA, PRA and Bank of England (BoE) in financial services) would fold AI into their current remits — as they are best equipped with the relevant domain-specific expertise. See a summary of the latest AI discussion paper from the BoE and FCA in our article here.

Initially, these principles will be issued on a non-statutory basis, giving regulators the discretion to prioritise according to the needs of the sectors they regulate.

Following the initial period, when parliamentary time allows, the government proposes to introduce a statutory duty requiring regulators to have "due regard" to the principles. The hope is that this will allow regulators the flexibility to exercise judgement while also strengthening their mandate. However, this duty will not be introduced if monitoring shows that implementation has been effective without the need to legislate.

A number of central support functions (e.g. monitoring the framework's effectiveness, conducting horizon scanning, promoting interoperability with international frameworks, supporting testbeds, etc.) will be provided from within the government itself. 

The White Paper also notes that legal responsibility for compliance with the five principles should be allocated to the actors in the AI lifecycle best able to identify, assess and mitigate risks effectively. As described above, it is unlikely that UK financial regulators will intervene with new AI-specific rules. Instead, they are expected to leverage existing frameworks such as the Consumer Duty and the Senior Managers and Certification Regime (SMCR) to ensure that firms themselves bear responsibility for the appropriate use of, and governance over, these models. More recently, there have even been suggestions in the UK Parliament for the creation of a bespoke AI SMCR regime — rather than relying on existing defined roles — but this is yet to be debated. 

Implications for generative AI

Unlike the EU AI Act, the White Paper mentions generative AI only sparingly — with two key takeaways:

  • The government plans to clarify the relationship between intellectual property law and generative AI to provide "confidence to businesses"
  • The government plans to establish a regulatory sandbox for AI, which is eventually expected to be expanded to cover generative AI models

What this means for FS firms

As AI adoption continues to increase — and in the absence of any formal legislation or regulation — all firms, including those within financial services, should proactively begin adapting their risk frameworks now. Our previous article summarises the guidance that financial services regulators have issued to date. In particular, the easy accessibility of generative AI tools brings a heightened risk that employees may use these tools in an inappropriate and uncontrolled way. Given the regulators' emphasis on Senior Managers' responsibility, firms should give this particular attention.

In regard to the two jurisdictions in question, financial services firms within the UK should keep abreast of all consultations issued by the FCA and PRA, as these regulators will be charged with regulating AI for their sectors. 

Financial services firms across the Channel, however, will not be able to rely solely on their sector-specific regulators. Instead, once the EU AI Act is in force, they will need to monitor the Commission's evolving list of high-risk AI systems (alongside all other guidance from the AI Board) to ensure they are complying with any consequent obligations.

Firms operating across both regions will need to begin planning how they can best navigate the two diverging approaches. 

KPMG firms are well equipped to support clients in this evolving area.

How KPMG in the UK can help

KPMG in the UK has experience of advising businesses on integrating new technology into their operations, including developing AI integration and adoption plans. Our technology teams can provide expertise and build out test cases, while our risk and legal teams can support with designing and implementing control frameworks. 

If you have any questions or would like to discuss any matters concerning AI, please get in touch.


Get in touch

Bronwyn Allan

Manager, Regulatory Insight Centre

KPMG in the UK

Kate Dawson

Wholesale Conduct & Capital Markets, EMA FS Regulatory Insight Centre

KPMG in the UK

Leanne Allen

Partner - FS Consulting Technology and Data, Data Science & AI capability Lead

KPMG in the UK

Chris Steele

Partner, Banking Risk and Regulation

KPMG in the UK