Introduction – model risk

In a progressively data-driven world, the last decade has seen a significant increase in the use of risk models for fraud detection. However, as these models become more widespread, so do the risks associated with them – ranging from data security threats to ethical and privacy concerns and regulatory non-compliance. The growing interest in Artificial Intelligence (AI) points to even wider use of AI in risk models, bringing great benefits but also increasing the potential risks. With regulators actively pushing for safe, sound, and responsible adoption, it is crucial to effectively manage the risks inherent in AI-driven models.

Benefits of the use of AI in models

While both AI models and non-AI models involve mathematical representations, the key difference between them is the level of complexity, adaptability and transparency in the algorithms and approaches used. The primary goal of a model is to simplify and explain a phenomenon or problem in a way that can be used to make predictions or decisions. An AI model, in contrast, is specifically designed to learn from data, identify patterns, and make predictions or decisions based on that learning. As a result, some models become so ‘big’ that, although the individual elements of the model are easy to understand, the emergent behavior of the model as a whole is difficult to (fully) understand.

There are great benefits to using AI within models. AI models are much more sensitive to input data than generic (fraud) risk models. As a result, one of the main benefits of AI models is gaining insight into data and finding patterns that would otherwise be hard to detect. For example, AI can help detect fraudulent transactions by analyzing complex behavioral patterns and flagging anomalies that human analysts or traditional models might miss, as sketched below. Additionally, models can benefit from AI when tuning their parameters: this task may be enhanced by using AI to test various settings and find a (near-)optimal configuration.
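
To make the anomaly-flagging idea concrete, here is a minimal sketch using an Isolation Forest, assuming scikit-learn is available; the transaction features, their distributions, and the contamination rate are illustrative assumptions, not elements of any specific production model.

```python
# Minimal sketch of anomaly-based transaction flagging (assumes scikit-learn).
# Feature names, distributions, and the contamination rate are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Hypothetical transaction features: amount, hour of day, transactions per day.
normal = rng.normal(loc=[50.0, 14.0, 3.0], scale=[20.0, 4.0, 1.5], size=(1000, 3))
suspicious = rng.normal(loc=[900.0, 3.0, 25.0], scale=[100.0, 1.0, 5.0], size=(10, 3))
X = np.vstack([normal, suspicious])

# An Isolation Forest isolates outliers via short average path lengths in random trees.
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X)  # -1 = flagged as anomalous, 1 = considered normal

print(f"Flagged {np.sum(flags == -1)} of {len(X)} transactions for review")
```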

Risks of AI

One of the main issues for the applicability and trustworthiness of an AI model is that complex models, for instance deep learning models or neural networks, are inherently difficult to understand and lack transparency. This can create a so-called ‘black box’, where users know the input and output but do not fully understand how the output is derived from the input. This opacity makes it difficult to assess such learning systems, and explaining the internal decision-making of these models is one of the greatest challenges in the field of AI. Additionally, while AI models excel at identifying patterns that human analysts may not (fore)see, there is also the risk of bias inherent in the underlying data, which can influence the outcomes of AI models.
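
One common, if partial, probe into such a ‘black box’ is permutation feature importance, which measures how much a model's performance degrades when a single input feature is shuffled. Below is a minimal sketch, assuming scikit-learn; the model, data, and feature names are hypothetical.

```python
# Minimal sketch of permutation feature importance as a probe into a 'black box'
# classifier (assumes scikit-learn). Model, data, and feature names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(seed=1)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.1 * rng.normal(size=1000) > 0).astype(int)  # label mostly driven by feature 0

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy it causes.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in zip(["amount", "hour", "frequency"], result.importances_mean):
    print(f"{name}: mean importance {importance:.3f}")
```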

To reduce the likelihood and impact of undesired or biased outcomes of AI models, and to promote responsible usage of AI, the European Union has introduced the first legal framework worldwide that specifically addresses the usage of AI: the European AI Act (EAIA). The goal of this legislation is to ensure that AI is trustworthy, safe, transparent, fair, and accountable. The EAIA classifies AI models based on the risk they pose: high-risk applications have to meet stricter requirements, while unacceptably risky uses of AI are banned completely. It also encourages ethical development and usage of AI.

Validating AI models

To ensure that a model performs as expected and complies with legislation, validation is crucial. KPMG developed a risk-based model validation methodology that outlines a structured process for thoroughly understanding a model's information, data, assumptions, processing, and reporting components. When validating an AI model, this methodology emphasizes different aspects than the validation of general risk-based models, including a focus on the responsible use of AI. To gain insight into an AI model, the following aspects are validated:

1. Governance and documentation. This aspect validates the allocation of roles and responsibilities and the surrounding processes. Specifically for AI models, it focuses on whether management understands why and how AI is used within the models, taking into account the objective and, for example, the EAIA.

2. Conceptual model and technical design. The conceptual model and the technical design should explain the choice to use AI (and the underlying algorithms used) and must show whether the setup and its results are explainable and in line with the objective. This aspect also evaluates the responsible usage of the model.

3. Functioning of the AI model. This aspect analyzes whether the technical functioning of the model, and not just its setup, fits the hypotheses and assumptions of the AI model. For example, this could encompass a specific focus on bias testing or stability testing, in which the input of the model is perturbed and the output evaluated (see the first sketch after this list).

4. Evaluation and accountability. If the AI model functions as intended, the final component is the periodic evaluation of, and accountability for, the AI model, including the required human oversight. This also entails a specific focus on data quality, as AI models are more sensitive to data quality changes than rule-based models, and such changes can have an unknown and material impact on the model. The AI model must fit within its context and take into account changing circumstances in laws and regulations, as well as changes in input data over time, so-called “data drift” (see the second sketch after this list).
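
As a first sketch, here is a minimal stability test that adds small noise to the model input and checks that output scores do not shift disproportionately; the model, noise scale, and tolerance are illustrative assumptions, not a prescribed validation standard.

```python
# Minimal sketch of a stability test: add small noise to the input and check
# that model scores do not shift disproportionately. The model, noise scale,
# and tolerance are illustrative assumptions, not a prescribed standard.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=2)
X = rng.normal(size=(500, 3))
model = IsolationForest(random_state=0).fit(X)

baseline = model.score_samples(X)                   # scores on original input
X_noisy = X + rng.normal(scale=0.01, size=X.shape)  # small input perturbation
perturbed = model.score_samples(X_noisy)

max_shift = np.max(np.abs(perturbed - baseline))
print(f"Maximum score shift under small noise: {max_shift:.4f}")
if max_shift > 0.1:  # illustrative tolerance
    print("Warning: model output is unstable under small input changes")
```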
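As a second sketch, here is a minimal data-drift check that compares a feature's distribution at model build time against recent production data using a two-sample Kolmogorov-Smirnov test, assuming scipy is available; the data and the significance level are hypothetical choices.

```python
# Minimal sketch of data-drift monitoring with a two-sample Kolmogorov-Smirnov
# test on one feature (assumes scipy). The data and significance level are
# hypothetical.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=3)
reference = rng.normal(loc=50.0, scale=20.0, size=5000)   # data at model build time
production = rng.normal(loc=65.0, scale=20.0, size=5000)  # recent production data

statistic, p_value = ks_2samp(reference, production)
if p_value < 0.01:  # illustrative significance level
    print(f"Data drift detected (KS statistic {statistic:.3f}, p = {p_value:.2e})")
else:
    print("No significant drift detected")
```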

As AI usage expands, a thorough understanding and robust validation of these models are critical for predictive accuracy, decision-making, responsible usage, and regulatory compliance. How do you, as an organization, manage the risks of AI? KPMG can assist you with the policies and implementation of AI models, and our validation framework will enable you to navigate the challenges of AI usage with confidence and assurance. Want to know more? Visit Forensic Services or contact us directly.
