UK regulators have published their strategies for the regulation of artificial intelligence (AI) within financial services. Overall, they have welcomed the government's sector-led and innovation-friendly approach. No new regulation is proposed, with both the BoE/PRA and FCA determining that they already have appropriate frameworks in place to support the government's principles. However, they have acknowledged that this position will need to be kept under review given the rapid growth in the deployment of AI within financial services. As a result, firms should act now to ensure that their AI risk management tools are fit for purpose and fully incorporate the requirements the regulators have identified.
The UK government published its White Paper, 'A pro-innovation approach to AI regulation', in March 2023 and its response to the consultation in February 2024. In that response, the FCA and BoE/PRA, along with other identified regulators, were asked to publish their own plans for AI regulation by the end of April 2024.
Overall, the BoE/PRA and FCA have determined that their existing frameworks remain appropriate to address the risks posed by AI, as these risks are 'not unique'. Specifically, the regulators plan to rely on tools such as the FCA's Principles for Businesses, the Consumer Duty, the operational resilience rules, the Model Risk Management principles and the Senior Managers and Certification Regime (SM&CR). Not only will these operate as guardrails but, because many of the tools are outcomes-focused, the regulators consider this approach proportionate, allowing firms the flexibility to adapt and innovate in a safe manner.
That said, the BoE/PRA and FCA have emphasised that their technology-agnostic approach does not mean that they are 'technology blind', and they will continue to monitor firms' deployment of AI to determine whether any amendments to their frameworks become necessary. For example, the approach will need to be reconsidered if it curtails, rather than promotes, innovation or if it does not sufficiently protect consumers from intentional or unintentional harm.
The regulators have also stressed that AI should not be considered in isolation: the best regulatory approach requires consideration of wider technology trends (e.g. cyber security, quantum computing and data).
These strategies build on previous publications by the BoE/PRA and FCA, including their joint AI Discussion Paper (October 2022) and the corresponding Feedback Statement (October 2023), the AI Public-Private Forum (AIPPF) final report (February 2022) and their 2019 and 2022 Machine Learning surveys.
Mapping against the government's AI principles
In its original White Paper, the government outlined five cross-cutting principles that regulators should fold into their remits: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. In their published strategies, the regulators have now outlined how these principles can be addressed through their existing frameworks.
Future work
The regulators propose to keep their approach under review, continuing to monitor the deployment of AI across financial services and amending their frameworks where necessary.
What does this mean for firms?
It is hard to overstate the potential of AI for a sector as vibrant and diverse as UK financial services.
The BoE/PRA and FCA strategies both leverage existing regulatory frameworks, with enough flexibility for different providers to develop innovative use cases that meet their customers' specific needs. This integrated, outcomes-based UK approach differs from that of other jurisdictions (notably the EU), which have built bespoke and relatively prescriptive rulebooks. Rather than creating new (and potentially siloed) capabilities, UK firms are being prompted to account for AI risk within their existing risk management structures.
On the flip side, however, the UK regulators' use of higher-level 'outcomes' leaves the bulk of practical implementation to firms themselves. As such, there is no time to waste: firms should act now to ensure they are meeting expectations and accounting for AI risks throughout their end-to-end business models.
How KPMG in the UK can help
KPMG in the UK has experience implementing AI solutions both internally and for clients. KPMG professionals' extensive technical, operational and risk experience can help accelerate your ambitions in this space, from defining your AI strategy, governance, and risk and control frameworks and identifying use cases, right through to building effective technology solutions that add value to your business, underpinned by effective security, data and technology controls.
If you would like to discuss, please get in touch.