
The UK’s evolving pro-innovation approach to AI regulation

Impacts on financial services firms


The regulatory framework for financial services firms using Artificial Intelligence (AI) in the UK is evolving — with the Department for Science, Innovation and Technology (DSIT) having now published the response to its 2023 White Paper. Overall, this response reaffirms the ‘agile and principles-based’ approach originally proposed. However, the government has pledged extra support for regulators (in the form of money and guidance) and has laid out the potential for future binding requirements on developers of the most advanced systems. Regulators have also been asked to outline their strategic approach to AI by 30 April.

Background


The UK government published its original AI Regulation White Paper in March 2023 — for a full summary, see our earlier article here. In short, this proposed that existing regulators be empowered to fold a set of five cross-cutting principles into their remits:

  1. Safety, security and robustness
  2. Appropriate transparency and explainability
  3. Fairness
  4. Accountability and governance
  5. Contestability and redress

Importantly, the White Paper emphasised a reticence to intervene with bespoke rules for AI. Instead, existing frameworks, such as the Consumer Duty and the Senior Managers and Certification Regime (for FS), would be used to ensure firms take ownership of the appropriate use of (and governance over) any AI models. The FCA and Bank of England (BoE) have already aligned themselves with this view, through recent speeches and the tailoring of the Model Risk Management Policy.


White Paper Response


Following a review of stakeholder feedback, the response to the White Paper has now been published. It emphasises the following key points:

Flexible approach with power delegated to existing regulators

  • The government has reiterated that a “firmly pro-innovation” approach will make the UK “more agile” than competitor nations. It will also allow the framework to benefit from sector-specific expertise.
  • The UK will not rush to legislate, or risk implementing ‘quick-fix’ rules that would soon become outdated or ineffective.
  • The government has reaffirmed its initial non-statutory approach for regulators to enforce the five AI principles. This offers adaptability, especially while the government is still establishing its approach — but this will be kept under review.

Key regulators (including the FCA and BoE) will publish plans by end-April on how they are responding to AI risks and opportunities

  • These plans must include (i) an outline of steps they are taking in line with White Paper expectations, (ii) an overview of AI-related risks in their areas, (iii) their current AI skillset, and (iv) how they plan to regulate AI over the next 12 months.
  • The FS regulators have already begun building towards this requirement — through the publication of the AI Public-Private Forum report in 2022 and their Feedback Statement to DP5/22 in 2023.

New support for regulators

  • A £10 million package of funding has been committed to boost regulators' AI expertise and help them develop cutting-edge tools to monitor and address risks in their sectors. Regulators will also be able to work with government departments to review potential gaps in existing regulatory powers and remits.
  • The government has published additional guidance to support the implementation of the five AI principles — with further updates planned by Summer 2024.

Acknowledgement of the likely need for future binding requirements on developers of the ‘most advanced general-purpose systems’ once the understanding of risk has matured

  • Overall, the government will consider legislative change to introduce binding measures if existing mitigations prove inadequate and voluntary measures are not implemented effectively. These measures would build on steps already taken by existing regulators.
  • Particular emphasis has been given to highly capable general-purpose AI because it poses a challenge to the White Paper's context-based approach. As these systems can be used in a wide range of applications, any underlying risks could quickly spread across multiple sectors of the economy. In other words, the risks could not be managed through the remit of any one regulator.
  • Any new binding measures would be directed at the ‘upstream’ developers of in-scope systems rather than at end-user firms (including FS firms). These measures would ensure adherence to the five AI principles and could include transparency measures, accountability and corporate governance obligations, or actions to address potential harms.
  • An update on these plans will be published by the end of the year.

Coordination across government and regulators through a Central Function

  • A new multidisciplinary team (within DSIT) is being recruited to conduct cross-sector risk assessment and monitoring. In particular, this team will evaluate the effectiveness of any interventions and create a ‘single source of truth’ on AI risks (i.e. a risk register) — which can be leveraged by regulators for their own policies.
  • A Steering Committee will be established as a formal coordination structure between the government and regulators, and lead AI Ministers will be appointed in each department.

Building on voluntary measures

  • The government will continue to rely on firms' commitment to certain voluntary measures — such as the ones agreed at the recent UK AI Safety Summit. This will bolster any specific requirements developed by regulators in their sectors.

Further commitment to the AI Safety Institute

  • The government has committed further investment in the AI Safety Institute — which was established during the recent AI Safety Summit. Although not a regulator, the Institute will act as an ‘early warning system’ for some of the most concerning risks.
  • One of the Institute's core functions is to develop and conduct evaluations of advanced AI systems — with leading AI companies having already pledged to provide priority access to their systems.

Wider support for the AI ecosystem and international cooperation

  • The application window for the DRCF AI and Digital Hub pilot scheme will launch in Spring 2024. This scheme aims to provide tailored advice on regulatory issues to help businesses launch new AI products and services — the insights from which will inform the government's approach.
  • The government has committed additional funding to initiatives including the launch of nine new research hubs, building the next generation of supercomputers, and a partnership with the US.
  • Other guidance will be published throughout the year — including an AI Assurance report and advice on the use of AI within HR.
  • The government will continue to act through bilateral partnerships and multilateral initiatives — including future AI Safety Summits.

What this means for FS firms


In the coming months, the government will formally establish activities to support regulator capabilities and coordination, including its Steering Committee. It will also publish findings from stakeholder engagement (via a series of expert discussion papers) and conduct new targeted consultations. Common themes to be addressed include intellectual property, data bias and the risk of discrimination, data protection and transparency, competition, and impacts on the workforce.

UK financial services firms should continue to keep abreast of all consultations issued by the FCA, PRA and BoE, as these regulators will be charged with the regulation of AI for this sector. Our previous article summarises the guidance issued to date. In particular, the FCA and BoE have now been asked by the government to publish their future plans by the end of April. FS firms should also keep informed of any requirements from adjacent regulators that may have some remit over their operations — e.g. the Information Commissioner's Office.

In the meantime, as AI adoption continues to increase, firms should proactively begin adapting their risk management and governance frameworks — e.g. by establishing an effective suite of controls aligned to their risk appetite. Given the emphasis on Senior Managers' responsibility and the Consumer Duty, these frameworks should be prioritised. Firms could also choose to apply for the DRCF pilot scheme for support in understanding the regulatory framework.

The government's indication that any future binding requirements would fall on the developers of general-purpose AI systems, rather than on the firms deploying them, may bring comfort to FS firms as they consider and/or increase the use of these systems within their applications and services.

With political agreement now reached on the EU AI Act (see more in our article here), and regulatory frameworks developing in the US, firms with a footprint across multiple jurisdictions should consider how they will navigate any divergence. 

How KPMG in the UK can help


Within KPMG in the UK, we have been developing and implementing AI solutions both internally and externally. Through our extensive technical, operational and risk experience, we can help accelerate your ambitions in the AI space, from helping you define your AI strategy, governance and use cases, right through to working with you to build effective AI solutions that are designed to add value to your business, underpinned by effective security, data and technology controls.

If you would like to discuss, please get in touch.


Connect with us

Kate Dawson

Wholesale Conduct & Capital Markets, EMA FS Regulatory Insight Centre

KPMG in the UK

Bronwyn Allan

Manager, Regulatory Insight Centre

KPMG in the UK

Douglas Dick

Head of Emerging Technology Risk

KPMG in the UK

