UK FS regulators publish their AI strategies

No new regulations needed

UK regulators have published their strategies for the regulation of artificial intelligence (AI) within financial services. Overall, they have welcomed the government's sector-led and innovation-friendly approach. No new regulation is proposed, with both the BoE/PRA and FCA determining that they already have appropriate frameworks in place to support the government's principles. However, they have acknowledged that this will need to be kept under review given the rapid growth in deployment of AI within financial services. As a result, firms should take action now to ensure that their AI risk management tools are fit-for-purpose and fully incorporate the requirements that have been identified.

Overview of approach

The UK government published its 'pro-innovation approach to AI regulation' White Paper in March 2023 and its response in February 2024 [see more in our previous articles here and here]. In that response, the FCA and BoE/PRA, along with other identified regulators, were asked to issue their own plans for AI regulation by the end of April 2024.

Overall, the BoE/PRA and FCA have determined that their existing frameworks remain appropriate to address the risks posed by AI, as these risks are 'not unique'. Specifically, the regulators plan to lean on tools such as their Principles for Business, Consumer Duty, Operational Resilience rulebook, Model Risk Management Principles, and Senior Managers and Certification Regime (SM&CR). Not only will these operate as guardrails, but, as many of the tools are outcomes-focused, the regulators' view is that this will be proportionate and allow firms the flexibility to adapt and innovate in a safe manner.

That said, the BoE/PRA and FCA have emphasised that their technology-agnostic approach does not mean that they are 'technology blind', and they will continue to monitor firm deployment to determine whether any amendments to their frameworks become necessary. For example, the approach will need to be reconsidered if it curtails, rather than promotes, innovation or if it does not sufficiently protect consumers from intentional, or unintentional, harm.

The regulators have also stressed that AI should not be considered in isolation — and that the best regulatory approach requires consideration of wider technology trends (e.g. cyber security, quantum computing and data).

These strategies build on previous publications by the BoE/PRA and FCA, including their joint AI Discussion Paper (October 2022) and corresponding Feedback Statement (October 2023), the AI Public-Private Forum (AIPPF) final report (February 2022) and their 2019 and 2022 Machine Learning surveys.



Mapping against government's AI principles

In its original White Paper, the government outlined five cross-cutting principles that regulators should fold into their remits (below). In their published strategies, the regulators have now outlined how these principles can be addressed through their existing frameworks:

Safety, security, and robustness

The regulators intend to lean on their overarching Principles for Business (e.g. the FCA's Principle 2, which requires firms to 'conduct business with due skill, care and diligence'), Threshold Conditions and the Senior Management Arrangements, Systems and Controls sourcebook (SYSC). At a more granular level, the Operational Resilience/Third Party Risk Management and Outsourcing frameworks are also relevant, where the onus is put on firms to manage any risks from suppliers that could impact important business services.

Appropriate transparency and explainability

The PRA references its Model Risk Management Principles (whereby consideration of the explainability and transparency of models is explicitly required). The FCA plans to account for this via the Consumer Duty (which requires 'honest and open dealings with retail customers') and Principle 7 of its Principles for Business (requiring 'due regard for information needs') in instances where the Consumer Duty does not apply. UK GDPR rules may also come into play in relation to the fair and transparent processing of personal data.

Fairness

This is mainly addressed by the FCA's Consumer Duty (e.g. finalised guidance specifically opposes firms using AI in a way that amplifies bias) and other guidance on the fair treatment of vulnerable customers. The FCA also references its Principles for Business, e.g. Principle 6 (due regard to the interests of customers), Principle 7 (communicating information clearly), Principle 8 (managing conflicts of interest) and Principle 9 (suitability of advice), as well as its Consumer Protection handbook chapters.

Accountability and governance

Both regulators note that requirements are principally covered by the SM&CR. Currently, there is consensus that a bespoke SM&CR AI role is not needed, and this responsibility could be incorporated into existing functions, e.g. the Chief Operations function (SMF24) and Chief Risk function (SMF4). Both regulators also reference sections of their rulebooks that outline governance and organisational requirements, while the FCA also specifically calls out the Consumer Duty, and the PRA specifically calls out its Model Risk Management expectations.

Contestability and redress

This is mainly addressed through the FCA's complaints handling framework. However, wider protections are also available through the Financial Ombudsman Service or Financial Services Compensation Scheme (where applicable). 


Future work

The regulators propose to: 

  1. Run the third instalment of the Machine Learning in UK financial services survey.
  2. Consider a follow-up industry consortium to the AIPPF.
  3. Consider opportunities to pilot new types of regulatory engagement, e.g. an AI sandbox.
  4. Build on established frameworks.
  5. Consider options to address the current fragmented regulatory landscape for data governance and management.
  6. Continue collaborating with other regulators and firms to further monitor and understand AI deployment in financial markets.
  7. Continue to analyse the implications of technological developments more broadly, beyond AI/ML.
  8. Continue participating in international initiatives.

What does this mean for firms?

It's hard to overstate the potential of AI for a sector as vibrant and diverse as UK financial services.

The BoE/PRA and FCA's strategies both leverage existing regulatory parameters, with enough flexibility for different providers to develop innovative use cases that meet their customers' specific needs. This UK approach — which is integrated and outcomes-based — differs from other jurisdictions (notably, the EU) which have built bespoke and relatively prescriptive rulebooks. Rather than creating new (and potentially siloed) capabilities, UK firms are being prompted to account for AI risk within their existing risk management structures.

On the flip side, however, UK regulators' use of higher-level 'outcomes' leaves the bulk of navigating practical implementation to the firms themselves. As such, there is no time to waste, and firms should act now to ensure they are meeting expectations and accounting for AI risks throughout their end-to-end business models.


How KPMG in the UK can help

KPMG in the UK has experience implementing AI solutions both internally and externally. KPMG professionals' extensive technical, operational and risk experience can help accelerate your ambitions in this space, from helping you define your AI strategy, governance, risk and control frameworks and use cases, right through to working with you to build effective technology solutions that are designed to add value to your business, underpinned by effective security, data and technology controls.

If you would like to discuss, please get in touch.


Our People


Bronwyn Allan

Manager, Regulatory Insight Centre

UK

Kate Dawson

Wholesale Conduct & Capital Markets, EMA FS Regulatory Insight Centre

KPMG in the UK

Douglas Dick

Head of Emerging Technology Risk

United Kingdom

