The use of Artificial Intelligence (AI) and Machine Learning (ML) in financial services may enable firms to offer better products and services to consumers, improve operational efficiency and risk management, increase revenue and drive innovation. However, it can also pose new challenges for firms and regulators and amplify existing risks.
The regulatory landscape has been developing across the EU and the UK (see our previous article), and the Bank of England (BoE), the Prudential Regulation Authority (PRA) and the Financial Conduct Authority (FCA) have now published a joint Discussion Paper (DP) on the regulation of AI and ML in UK financial services.
The use of AI and ML in financial services is evolving rapidly, with implications for consumers, firms, markets and the supervisory authorities. Adoption is ramping up within established financial services firms and newer fintech and insurtech companies, as well as through partnerships with, and investment in, smaller AI-specific vendors.
Firms are applying AI/ML to more material business areas and use cases — from anti-money laundering and credit and regulatory capital modelling in banking, to claims management, product innovation, pricing and capital reserve modelling in insurance, to order routing/execution and influencing portfolio management decisions in investment management.
They are also using more complex AI/ML techniques, often driven by growing volumes of data and the pursuit of improved performance and better customer outcomes. Over time, the complexity and sophistication of models are likely to increase further.
AI is used across many other sectors of the economy. The UK Government, through the Office for AI, is expected to issue a White Paper on regulating AI in late 2022/early 2023.
Building on the work of the AIPPF
The DP follows the February 2022 final report of the AI Public-Private Forum (AIPPF), which was convened by the BoE and the FCA in 2020 to promote dialogue on AI innovation and its safe adoption within financial services.
The work of the AIPPF was structured around three key areas of the AI lifecycle — data, model risk and governance. Risk can arise in each of these areas and, within interconnected AI systems, could ultimately extend to the financial system as a whole.
The final report explored the various barriers to adoption of AI, the challenges and risks in each of the key areas, and possible solutions. It concluded that clarifying regulatory expectations, including how existing regulation applies to the adoption and use of AI, would be a key component in fostering innovation. The AIPPF also recommended that regulators identify the most important and/or high-risk use cases for financial services in order to develop appropriate mitigation strategies and policy initiatives. It proposed formal consultation on industry best practice or guidelines, and suggested that it could be beneficial for an industry consortium to develop solutions and industry-wide standards, and for AI practitioners to be certified.
Aims and structure of the DP
The new DP picks up on the work of the AIPPF, in particular the issues of clarity around both existing and future regulation. It aims to share and obtain feedback on:
(i) The potential benefits and risks related to the use of AI
(ii) How the current regulatory framework applies to AI
(iii) Whether additional clarification of existing regulation may be helpful
(iv) How policy can best support further safe and responsible AI adoption
Chapters 1 to 3 cover, respectively, the structure of the DP, the regulators' objectives and remits (including definitions of AI and the AI lifecycle, and context on how AI is increasingly used), and the risks and benefits of AI.
Potentially of most interest to firms, and where the most feedback is requested, is Chapter 4. It details the rules, regulations, principles and guidance that could be relevant in supporting the objectives of the regulators — consumer protection, safety and soundness, insurance policyholder protection, financial stability and market integrity, and promoting competition — while mitigating AI risks. For example, it discusses whether the incoming FCA Consumer Duty and other FCA Principles would require firms to be able to monitor, explain and justify differences in price and value that their AI models produce for different cohorts of customers.
Chapter 5 summarises the regulators' questions. These focus on three main areas: supervisory authorities' objectives and remits, the benefits and risks of AI (and where supervisory action should be prioritised) and regulation — specifically whether AI can be safely and responsibly adopted under the existing legal and regulatory framework.
The feedback period closes on 10 February 2023.
What does the DP mean for firms?
The DP asks a significant number of open-ended questions and seeks industry feedback on whether the existing regulatory framework needs fine-tuning or a new approach is required. A similar approach was taken with the FCA's first Consumer Duty consultation, which was followed by a more fully formed paper once industry views had been gathered.
A follow-up speech by Jessica Rusu, the FCA's Chief Data, Information and Intelligence Officer, provided further insight into the FCA's likely direction of travel, focusing on firms' responsibility and accountability in relation to the use of AI. She suggested that UK financial regulators already have a framework — the Senior Managers and Certification Regime (SMCR) — that can be applied to the regulatory challenges posed by AI. Responsibility for the management of AI risks should lie with firms: treating AI as autonomous or as having agency risks appearing to shift accountability for decision-making away from senior managers. The DP asks whether, and how, creating a new Prescribed Responsibility for AI, allocated to a Senior Management Function (SMF), would help enhance effective governance of AI. It also asks whether further guidance on the 'reasonable steps' element of the SMCR, in an AI context, would be helpful.
Rusu also emphasised that firms' governance frameworks around the use of AI should address the following challenges:
1. Responsibility — who monitors, controls and supervises the design, development, deployment and evaluation of AI models
2. Creating a framework for dealing with novel challenges, such as AI explainability
3. How effective governance contributes to the creation of a community of stakeholders with a shared skill set and technical understanding
Finally, Rusu stressed the importance of high-quality data in driving good AI outcomes, and emphasised that the use of data must comply with data protection legislation.
Looking ahead
Although there is still some way to go in fully understanding and clarifying the requirements for managing and mitigating AI risks in financial services, a regulatory path is slowly emerging, with many of the building blocks already in place. The safe and effective adoption of AI will require a regulatory approach that builds, maintains and reinforces trust at all stages of the lifecycle for practitioners, end customers and supervisors alike. Given the potential conduct, market, prudential and resilience impacts, all firms using, or considering using, AI models should take the opportunity to engage with this discussion.