With the 2018 publication of the TRIM Guide to General Topics, the ECB sharpened its focus on models and how they affect management decision-making and the overall understanding of risk. Many view European supervision as a latecomer to model risk, particularly compared to the US. As new models and techniques are developed, will the current European guidelines be sufficient to safeguard banks and their customers?

As the complexity of banking grows, models are becoming increasingly prominent in decision-making. Models are used to gauge the risks banks take on, to determine how much capital they should hold, and to assess how this might vary under different scenarios. In areas like credit scoring, models are even taking decisions of their own.

Of course, blindly trusting the outputs of these models is a risk in itself. Hence the term “model risk”. The US Federal Reserve's SR 11-7 guidance, published in 2011, defined model risk as the potential for adverse consequences from decisions based on incorrect or misused model outputs.

In 2017 the European Central Bank (ECB) followed suit when it published the TRIM Guide for Internal Models, which identified model risk management as a control for flaws in the development, validation and use of models. Our conversations with banks suggest that European banks are taking a fragmented approach to this area: some have developed model risk monitoring and reporting, while others treat it as a secondary concern. Banks within the scope of US SR 11-7 are further along in implementing effective controls for model risk, and typically cover a wider range of models.

Tackling model risk is not easy. For a start, the definition of a model has never been clear cut. While there are no hard rules, many banks would be more likely to view a trading algorithm as a model than, say, a simple spreadsheet. The rapid advance of artificial intelligence (AI) and machine learning (ML) threatens to complicate the picture further. For now, AI and ML are more likely to be used as “challengers” than as primary tools, and it is perhaps here that we find the most interesting developments in model risk management practice.

For some people, AI and ML can conjure images of robots running amok. Of course, reality is more mundane. Banks are applying AI to tasks such as extracting text from policy documents, tailoring the responses of customer service bots and scanning large volumes of transactions to spot indications of fraud.

The ability of AI to detect useful signals in large volumes of unstructured data offers many benefits for banks. It frees analysts and managers to focus on more strategic questions. It can also strip out the human biases that drive many existing models and processes, and generate new insights from existing data.

However, the use of AI can be a double-edged sword. A hypothetical bank that applies ML to millions of transactional records might surface surprising or even nonsensical findings. Say it discovers a history of very few defaults among self-employed customers with five or more credit cards. The machine appears to have found a desirable population in terms of risk, but without a human to evaluate and sense-check the result, there is always the risk of a “black swan” breaking the rule (in this example, once banks tighten their lending criteria these customers become a much riskier proposition, since they can no longer move their debt to a new card).
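
To make this concrete, the sketch below (in Python, using entirely synthetic data and illustrative feature names) shows how a standard ML classifier can latch onto exactly this kind of historical artefact and score the segment as near risk-free:

```python
# A sketch of how an ML model can latch onto a historical artefact.
# All data is synthetic and the feature names are illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 10_000

income = rng.normal(40_000, 12_000, n)   # noise feature
debt_ratio = rng.uniform(0, 1, n)        # true default driver
num_cards = rng.integers(1, 8, n)        # 1..7 credit cards
self_employed = rng.integers(0, 2, n)    # 0/1 flag

# Bake the artefact into the history: self-employed customers with five
# or more cards "never" defaulted, because (unobserved by the model)
# they kept rolling their debt onto new cards while credit was easy.
base_p = 0.05 + 0.15 * debt_ratio
artefact = (self_employed == 1) & (num_cards >= 5)
y = rng.random(n) < np.where(artefact, 0.01, base_p)

X = np.column_stack([income, debt_ratio, num_cards, self_employed])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# The model duly scores the artefact segment as near risk-free: exactly
# the "desirable population" a human reviewer should challenge.
seg = (X_test[:, 3] == 1) & (X_test[:, 2] >= 5)
p = model.predict_proba(X_test)[:, 1]
print(f"Predicted default rate, artefact segment: {p[seg].mean():.3f}")
print(f"Predicted default rate, everyone else:    {p[~seg].mean():.3f}")
```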

A lack of transparency is another difficulty, and could be the biggest obstacle to large-scale take-up and acceptance of ML and AI processes. When people cannot understand the mechanisms and logic a model uses, there will always be an understandable degree of scepticism, particularly on the part of banking supervisors. Improper decisions by “black box” models carry huge reputational risk, which means the expertise of model validators needs to match the sophistication of the models they review.
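
As a hypothetical illustration of the kind of check a validator might run on a “black box”, the sketch below applies permutation importance, a model-agnostic explanation technique, to a synthetic model; it is one basic transparency tool, not a full validation approach:

```python
# A sketch of one transparency check a validator might run on a
# "black box": permutation importance, which is model-agnostic.
# The model and data here are synthetic, not a real validation suite.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5_000, n_features=6,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out score:
# features whose shuffling barely matters are not driving the decisions.
result = permutation_importance(black_box, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, drop in enumerate(result.importances_mean):
    print(f"feature_{i}: mean score drop = {drop:.3f}")
```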

So what can banks do to stay on top of emerging model risks? The ECB's supervisory expectations on model risk can be found in the TRIM guide. However, we feel these expectations are still maturing, and the general exhortation for banks to “have a model risk management framework” may frustrate those looking for specific guidance on AI- or algorithm-related model risks. With that in mind, we suggest that banks should:

  • Apply a regular validation process to all models. Where a model is highly complex and in-house expertise is lacking, banks should consider upskilling initiatives or different ways of working (for example, Agile) to bring relevant subject-matter expertise to bear. In doing so, however, care should be taken to maintain the independence of the validation unit. A minimal example of one recurring validation check is sketched after this list.
  • Define the perimeter of their model risk framework. For tools falling outside this scope, a robust controls process should be in place as a substitute for model validation; at times, a mix of controls and validation may be needed. One example is the many hundreds of equity research models that some banks hold, which may be too numerous to validate individually.
  • Embed a solid risk culture through a programme of communication and education. It is often left to risk functions to write a bank's model risk management framework, but turning a document into a living process requires far wider effort and accountability. The pace of change around AI and ML means that users must be aware of models' limitations and of the potential impact of model misuse.
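
To illustrate the first suggestion, here is a minimal sketch (with synthetic scores) of one such recurring check, the population stability index; real validation would of course cover far more than input drift:

```python
# A sketch of one recurring validation check: the population stability
# index (PSI), which compares the score distribution at development
# time with the one seen in production. The 0.1 / 0.25 thresholds are
# common rules of thumb, not a regulatory standard.
import numpy as np

def psi(expected, actual, n_bins=10):
    """PSI between two score samples, binned on the expected sample."""
    edges = np.quantile(expected, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf   # catch out-of-range scores
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)      # avoid log(0) on empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
dev_scores = rng.beta(2, 5, 20_000)     # scores at model development
prod_scores = rng.beta(2.6, 5, 20_000)  # drifted production scores

print(f"PSI = {psi(dev_scores, prod_scores):.3f}")
# A PSI above roughly 0.25 would typically trigger a model review.
```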

Lastly, banks shouldn't be deterred from exploring the potential benefits of AI and ML, even if only as a challenger or a pilot of limited scope. Despite the issues outlined above, we do not believe these techniques should be avoided. As models become more and more complex, the controls around them and the understanding of model owners need to adapt accordingly. One only has to look at some of the products developed in the run-up to the Great Financial Crisis to suspect that the next generation of model risks won't come from out-of-control AI, but from our own hubris in believing we understand these complex models.
