
Artificial Intelligence & Machine Learning

Regulatory approaches are being developed.

The application and impact of Artificial Intelligence (AI) and Machine Learning (ML) technology in the financial services industry are growing rapidly. The French prudential authority ACPR's 2022 studies on digital transformation found that insurers deem the implementation of AI technology to be the most promising avenue to help develop their business, and that the deployment of AI is maturing at a faster pace in the banking sector than, for example, distributed ledger technology.

The use of AI technology and models in financial services can bring benefits for households, firms and the economy. However, regulators are concerned that features associated with AI, such as computational power and speed, model complexity and data challenges, have the potential to cause consumer harm, damage market integrity, impact the safety and soundness of individual firms and undermine wider financial stability.

"AI can amplify risk as well as benefits. For example, you only need some data to be inaccurate or skewed for customers to be unfairly profiled as high-risk and locked out of the market. On the flip side, more accurate data can enable personalisation and that can lead to better deals for some customers. Intelligent automation can also slash operational costs, with price reductions passed on to customers. It is vital that this sector can harness the value of AI for the benefit of society at large, to consider the ethical and social dimensions of AI."

Nikhil Rathi,
Chief Executive
FCA

Regulators are responding with guidance and by modifying existing and developing new frameworks. Common themes are emerging which firms should consider in their adoption, development and governance of AI technology.

What is AI?

Definitions of AI vary. IOSCO defines AI as the study of methods for making computers mimic human decisions to solve problems. It includes tasks such as learning, reasoning, planning, perception, language understanding and robotics. IOSCO further defines machine learning (ML) as a subset and application of AI, which focuses on the development of computer programs designed to learn from experience without being explicitly programmed to do so.

The Bank of England's and FCA's Artificial Intelligence Public-Private Forum (AIPPF) defines AI broadly as 'the use of advanced statistical techniques with large computational and data needs'.

Applications of AI in financial services

Firms can use AI to increase their effectiveness and efficiency. AI enables large-scale data consolidation and analysis, which can be used in areas such as financial modelling (allowing probability projections and different scenarios to be considered) and risk monitoring for financial crime and fraud. AI can enhance staff productivity by automating manual processes and can be used in customer relationship management tools, such as chatbots or virtual assistants, to enable customers to make more informed decisions about appropriate products and/or services.

In theory, AI-based processes can promote greater inclusivity. For example, credit assessments for mortgages can analyse a broader range and larger volume of data, potentially leading to more consumers being offered mortgages. However, there are also risks of unfair discrimination if bias is present (inadvertently or otherwise) within AI datasets or is built into AI programming.

Regulatory guidance on managing emerging risks

AI/ML technology can heighten existing risks and bring fresh challenges to established legal concepts, operational controls and regulatory frameworks. Regulators have begun to identify these risks and issue guidelines for firms across key themes:

1. Data

Firms and regulators deal with large volumes of unstructured data. AI technology can help to process and analyse this data, but the quality of the analysis will only be as good as the underlying data, including the data initially used to design and train the AI models.

The IOSCO guidance for intermediaries and asset managers using AI and ML states that 'regulators should consider requiring firms to have appropriate controls in place to ensure that the data that the performance of the AI and ML is dependent on is of sufficient quality to prevent biases and sufficiently broad for a well-founded application of AI and ML'.

Data governance and record keeping is one of the six governance principles for ethical and trustworthy AI in the insurance sector recommended by the EIOPA Consultative Expert Group. The principle highlights that 'the provisions included in national and European data protection laws (e.g. GDPR) should be the basis for the implementation of sound data governance throughout the AI system lifecycle', and that firms should ensure the data used in AI systems is accurate, complete and appropriate.

The German Federal Financial Supervisory Authority (BaFin) supervisory principles for the use of algorithms (AI models) in decision-making processes also highlight that firms must have a data strategy which guarantees the continuous provision of data and defines the data quality and quantity standards to be met.

One of the advantages of AI is its ability to process large volumes of unstructured or 'alternative' data. This has prompted growing discussion around the use of synthetic data, including an FCA call for input, as a way to build and test models where there is insufficient data or the 'real' data cannot be used because of privacy concerns or obligations. The AIPPF's final report suggests that it is good practice to have a clear understanding and documentation of the provenance of data used by AI models, especially in the case of third-party data, and of the limitations and challenges of using alternative and/or synthetic data.
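
To make these data-quality expectations concrete, the sketch below shows the kind of automated checks a firm might run before training a model. It is illustrative only: the column names, thresholds and pandas-based approach are assumptions for the example, not anything prescribed by IOSCO, EIOPA or BaFin.

```python
import pandas as pd

# Illustrative pre-training data-quality checks. The thresholds below are
# assumptions for this example, not regulatory requirements.
MAX_MISSING_RATE = 0.05   # flag columns with more than 5% missing values
MIN_GROUP_SHARE = 0.20    # flag groups making up under 20% of the sample

def run_quality_checks(df: pd.DataFrame, group_col: str) -> list[str]:
    """Return a list of data-quality findings for a training dataset."""
    findings = []

    # Completeness: flag columns with too many missing values.
    for col, rate in df.isna().mean().items():
        if rate > MAX_MISSING_RATE:
            findings.append(f"{col}: {rate:.0%} missing exceeds {MAX_MISSING_RATE:.0%}")

    # Integrity: exact duplicate rows can distort training.
    duplicates = int(df.duplicated().sum())
    if duplicates:
        findings.append(f"{duplicates} duplicate rows found")

    # Breadth: flag under-represented groups, a crude check that the data
    # is "sufficiently broad" in IOSCO's terms.
    for group, share in df[group_col].value_counts(normalize=True).items():
        if share < MIN_GROUP_SHARE:
            findings.append(f"group '{group}' is only {share:.0%} of the data")

    return findings

# Toy dataset standing in for loan-application data.
data = pd.DataFrame({
    "income": [30_000, 45_000, None, 52_000, 61_000, 38_000],
    "region": ["north", "north", "north", "north", "north", "south"],
})
for finding in run_quality_checks(data, group_col="region"):
    print("WARN:", finding)
```

In practice, checks like these would sit alongside the documented data provenance and lineage records that the AIPPF recommends.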

2. AI model design, explainability and control

As with traditional systems and models, regulators expect firms to have robust inventory, development and testing frameworks for AI and ML models. BaFin envisages that every AI model should go through an appropriate validation process prior to being made operational and should always be reviewed by an independent function or individual not involved in the original modelling process.

The AIPPF recommends that data science and risk teams should collaborate from the early stages of the model development cycle, while maintaining independent challenge.

IOSCO's guidance emphasises the importance of adequate skills, expertise and experience to develop, test, deploy, monitor and oversee AI controls, and considers how much firms should disclose about their use of AI. Firms should understand their outsourcing dependencies and manage their relationships with third-party providers, including monitoring their performance and conducting oversight.

However, unlike traditional models and systems, AI models continually learn and develop over time. IOSCO maintains that 'regulators should require firms to adequately test and monitor the algorithms to validate the results of an AI and ML technique on a continuous basis'. In its Principle of Robustness and Performance, the EIOPA expert group suggests that the performance of AI systems should be assessed and monitored on an ongoing basis. The Monetary Authority of Singapore (MAS) Principles on the use of AI point out that AI-driven decisions should be regularly reviewed and validated for accuracy and relevance, ensuring that decision models behave as designed and intended.
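
As an illustration of what continuous monitoring can look like in practice, one widely used technique is to compare the distribution of a model's live outputs against the distribution seen at validation time, for example with a population stability index (PSI). The sketch below is a minimal example; the bin count, thresholds and simulated scores are assumptions, not regulatory requirements.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference and a live score sample."""
    # Bin edges come from the reference (validation-time) distribution.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live scores

    exp_pct = np.histogram(expected, edges)[0] / len(expected)
    act_pct = np.histogram(actual, edges)[0] / len(actual)

    # Small floor avoids division by zero / log(0) on empty bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
validation_scores = rng.normal(0.50, 0.1, 10_000)  # scores seen at validation
live_scores = rng.normal(0.58, 0.1, 10_000)        # scores in production

drift = psi(validation_scores, live_scores)
# Common rule of thumb: < 0.1 stable, 0.1-0.25 investigate, > 0.25 review/retrain.
print(f"PSI = {drift:.3f}")
```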

All regulators emphasise the importance of transparency and explainability to stakeholders, including internal control functions, regulatory supervisors and consumers. The EIOPA expert group recommends that explanations should be meaningful and easy to understand in order to help stakeholders make informed decisions. Consumers should be aware that they are interacting with AI systems and that these have limitations. The MAS principles state that data subjects should be provided, upon request, with clear explanations on what data is used to make AI-driven decisions about them and how the data affects decisions.
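
For a simple linear scoring model, a per-decision explanation of the kind MAS describes can be produced directly from the model's coefficients, as in the sketch below. The features, coefficients and baseline values are hypothetical, and non-linear models generally require dedicated explainability techniques such as SHAP or LIME.

```python
# A minimal sketch of a per-decision explanation for a linear credit-scoring
# model. All feature names, coefficients and baseline values are hypothetical.
COEFFICIENTS = {"debt_to_income": 1.8, "missed_payments": 0.9, "credit_age_years": -0.05}
BASELINE = {"debt_to_income": 0.35, "missed_payments": 0.4, "credit_age_years": 8.0}

def explain(applicant: dict[str, float]) -> list[str]:
    """Rank features by how far they push this applicant's risk score
    above or below the score of an average (baseline) applicant."""
    contributions = {
        name: coef * (applicant[name] - BASELINE[name])
        for name, coef in COEFFICIENTS.items()
    }
    lines = []
    for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        direction = "raised" if value > 0 else "lowered"
        lines.append(f"{name} = {applicant[name]} {direction} the risk score by {abs(value):.2f}")
    return lines

for line in explain({"debt_to_income": 0.55, "missed_payments": 2, "credit_age_years": 3}):
    print(line)
```

Expressing each feature's contribution relative to an average applicant keeps the output in plain language, which is closer to the 'meaningful and easy to understand' standard the EIOPA expert group recommends than raw model internals would be.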

Regulators have highlighted the need for human oversight. BaFin's 'Putting the human in the loop' principle requires that 'employees should be sufficiently involved in the interpretation and use of algorithmic results when reaching decisions'.

MAS has also highlighted the importance of ethics and the human factor in decision models: 'AI-driven decisions are held to at least the same ethical standards as human-driven decisions.' And the EIOPA expert group recommends that insurance firms should establish adequate levels of human oversight throughout the AI system's life cycle.

3. Preventing bias

Unsurprisingly, preventing bias and discrimination in AI models is a key issue for financial regulators and requires consideration during data collection, model design and testing. The AIPPF suggests that good practice includes clearly documenting methods and processes for identifying and managing bias in inputs and outputs. The EIOPA expert group sets out a principle of fairness and non-discrimination.
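
As a minimal illustration of testing model outputs for bias, the sketch below computes the gap in approval rates between groups (a demographic parity check). The data and the choice of metric are assumptions for the example; in practice firms typically assess several fairness metrics, which can conflict with one another.

```python
from collections import defaultdict

def demographic_parity_gap(records: list[tuple[str, bool]]) -> float:
    """Largest difference in approval rate between any two groups.

    Each record pairs a protected-attribute group label with the model's
    approve/decline outcome for one applicant.
    """
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, outcome in records:
        total[group] += 1
        approved[group] += outcome  # bool counts as 0/1
    rates = {g: approved[g] / total[g] for g in total}
    return max(rates.values()) - min(rates.values())

# Toy decisions: group A is approved 2/3 of the time, group B 1/3.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
gap = demographic_parity_gap(decisions)
print(f"approval-rate gap: {gap:.2f}")  # flag for review if above a set tolerance
```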

4. Governance

With the increasing use of AI in different areas of financial services firms, regulators are highlighting the need for a comprehensive governance framework for AI.

IOSCO's guidance emphasises accountability: 'regulators should consider requiring firms to have designated senior management responsible for the oversight of the development, testing, deployment, monitoring and controls of AI and ML'.

Good practice according to the AIPPF is to establish a central committee to oversee firm-wide development and use of AI. The AIPPF also recommends that firms should include AI-specific elements in their risk and privacy frameworks, and that decision-making should be guided by operational and ethical principles.

However, EIOPA's principle of proportionality ('insurance firms should conduct an AI use case impact assessment in order to determine the governance measures required for a specific AI use case') suggests that the extent of a firm's governance framework can be tailored to the risks posed by each use case.

Developing rules and legislation

Regulators' observations and guidance are likely to develop into rules. In the UK, the Bank of England and FCA are expected to publish a Discussion Paper later in 2022 which considers the appropriate role of regulators in supervising risks stemming from firms' use of AI and gathers views on how policy can best support further safe AI adoption.

As the use of AI increases, stricter requirements are also being introduced through legislation covering all sectors, not just the financial sector.

1. EU AI Act

The objectives of the European Commission's proposed Artificial Intelligence Act (AI Act) include: safe AI systems in the EU, the protection of fundamental rights and EU values, and enhanced governance and effective enforcement.

The Act assigns AI applications to three risk categories:

  • Applications and systems that create an unacceptable risk (e.g., government-run social scoring) are banned 
  • High-risk applications (e.g., CV-scanning tools that rank job applicants) are subject to specific legal requirements 
  • Applications not explicitly banned or listed as high-risk are largely left unregulated 

The AI Act is not expected to be agreed until early 2023 and, as currently drafted, will come into force two years later.

2. UK developments

The UK Government's National AI Strategy recognises that AI will become mainstream in much of the economy and that the UK's governance and regulatory regimes will need to keep pace with the fast-changing demands of AI. To date, the UK has taken a strongly sector-based approach to the regulation of AI, but the Government is reviewing whether this is the right approach. The Office for AI is due to publish a White Paper in 2022, setting out the UK Government's position on governing and regulating AI, including proposals to address the potential risks and harms posed by AI technologies.
