Banks’ growing reliance on models, and the increasing complexity of those models as emerging technologies take hold, make it imperative for banks to deploy a robust model risk management programme. Recent guidance from the ECB, and its interim TRIM findings, also show that almost all investigated banks need to do better in this area, especially when it comes to data quality. Banks should act now to ensure they are prioritising the most important areas of model risk management.

Banks are relying more and more on models to measure and manage risk and to support an ever-widening scope of decision-making, largely in response to heightened regulatory requirements. In addition, the rapid adoption of emerging technologies such as machine learning means that the models themselves are fast becoming more complex and opaque. In turn, supervisory and regulatory scrutiny of model risk management (MRM) has increased significantly in recent years.

It is therefore crucial for banks, and Significant Institutions (SIs) in particular, to have a robust MRM framework. The European Central Bank’s (ECB) guide to internal models – general topics chapter (‘the Guide’) defines an effective MRM framework as including written MRM policies; a firm-wide model inventory; a system of model classification; guidelines for model risk measurement; procedures for model risk reporting; and a framework for model risk governance. When it comes to model risk measurement, the Guide emphasises the qualitative aspects of model risk and the need for consistency across banking groups. Data deficiencies, model misuse and implementation errors are identified as examples of qualitative model risks.

In recent years, the ECB’s focus on qualitative aspects of model risk, especially data quality, appears to have strengthened. This is illustrated by the ECB’s letter to banks in April 2019 setting out its findings from the Targeted Review of Internal Models (TRIM) project, which indicated that nearly all on-site investigations (OSIs) revealed issues related to data quality. The letter focuses heavily on inadequacies in data management and data quality processes, particularly in credit risk models and IRB modelling. Among the most notable aggregate findings:

  • 91% of OSIs had findings related to 'data management and data quality processes'

The findings on data management and data quality processes largely concern weaknesses in the governing principles and policies of the data quality framework, the allocation of roles and responsibilities, the metrics used to monitor data quality, and the processes for data quality incident management and reporting. These findings support our own market observations, which suggest that poor data quality and insufficient model documentation are the leading sources of findings raised during model validation.

Even more recently, in July 2019, the ECB published a revised version of the risk specific chapters of the ECB guide to internal models. The amended risk specific chapters reinforce supervisory expectations relating to the maintenance and use of data. Key areas of change include data maintenance, the use of external data and ratings, and the interaction between model data, model processes and human judgement (see Figure 1).


Figure 1: Summary of data-related changes to the ECB guide to internal models – Risk specific chapters (July 2019 version)

Note: The numbers in brackets refer to paragraph numbers of the ECB guide to internal models – risk specific chapters (July 2019 version)

All this means that many banks have work to do to ensure that they are meeting supervisory expectations on MRM, especially in relation to qualitative aspects of model risk. In light of the revised ECB guide to internal models and the interim TRIM findings, we believe that banks should adopt the following measures, if they have not done so already:

  • Effective data governance - Supervisors expect banks to establish a sound data governance framework. This should include defining key rules, processes, roles and responsibilities. For example, some banks appoint a Chief Data Officer (CDO), who typically issues internal policies, defines roles and responsibilities and sets up data quality controls in order to ensure effective firm-wide data management.
  • Robust data quality management - Banks need an effective process for vetting model inputs, including reviewing the accuracy, completeness and appropriateness of input data (a minimal sketch of such checks follows after this list). For instance, banks should consider adopting the “four eyes” principle (i.e. maker and checker). Banks must also maintain end-to-end data audit trails, and segregate data quality assurance from data processing.
  • Data quality incident management and reporting - Banks should establish processes to review, report, log and track issues for remediation and improvement. Such logs and historical incidents can also be used for training purposes and for setting up controls to prevent or mitigate the recurrence of similar issues.
  • Thorough implementation testing - Banks should ensure that all models are tested before deployment and, for high-risk models, after every significant adjustment. The users involved in testing can eventually become trainers or subject matter experts, especially where models are deployed firm-wide.
  • Comprehensive model register - Banks should maintain a thorough firm-wide inventory of every model used in decision-making. In addition, banks should conduct a model inventory attestation process on a periodic basis.
  • Regular model back testing - Banks should consider performing periodic model back testing, particularly for models with sufficient historical data. Comparing predicted outputs with actual outcomes is key to ensuring that models are performing as intended. How well back testing can ascertain a model’s ability to capture risk exposure depends on the accuracy and appropriateness of the data. For instance, to back test a PD (probability of default) model, data on instances where borrowers have actually defaulted needs to be collated so that the model’s predictions can be checked against observed outcomes (see the second sketch after this list).
  • Capital allocation for model risk - Some banks may choose to hold additional capital to cover any potential losses associated with model risk.
  • Use of challenger models - Banks should consider comparing the inputs, methods and outputs of one model against those of an alternative, so as to challenge assumptions and judgemental parameters and confirm that models are reaching reasonable decisions (a third sketch follows after this list). When challenger models are built from the existing documentation alone, this approach can also expose inadequacies in the implementation or in the model documentation itself.
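To make the data quality checks described above more concrete, here is a minimal Python sketch of automated input vetting. The dataset, column names (exposure, ltv, origination_date) and tolerances are illustrative assumptions rather than a prescribed data model; in practice such checks would run against the bank’s own data dictionary and feed the incident log described above.

```python
import pandas as pd

# Fabricated loan-level input extract; field names and values are assumptions
# used purely for illustration.
inputs = pd.DataFrame({
    "loan_id": ["L1", "L2", "L3", "L4"],
    "exposure": [250_000, 180_000, None, 95_000],
    "ltv": [0.72, 1.45, 0.60, 0.88],
    "origination_date": ["2015-03-01", "2018-07-15", "2016-11-30", "2030-01-01"],
})
inputs["origination_date"] = pd.to_datetime(inputs["origination_date"])

issues = []

# Completeness: key fields must not be missing.
for col in ["exposure", "ltv"]:
    for loan in inputs.loc[inputs[col].isna(), "loan_id"]:
        issues.append((loan, col, "missing value"))

# Accuracy / plausibility: simple range checks against agreed tolerances.
for loan in inputs.loc[inputs["ltv"] > 1.2, "loan_id"]:
    issues.append((loan, "ltv", "outside plausible range (> 120%)"))
for loan in inputs.loc[inputs["origination_date"] > pd.Timestamp.today(), "loan_id"]:
    issues.append((loan, "origination_date", "date in the future"))

# Each flagged record would feed the incident log rather than being silently
# corrected, preserving the end-to-end audit trail.
for loan, field, problem in issues:
    print(f"{loan}: {field} - {problem}")
```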
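The PD back-testing step can be illustrated with a second short sketch. The portfolio figures below are fabricated, and the simple normal approximation to a binomial test is only one possible statistical check; in practice banks apply a wider battery of tests at grade and portfolio level.

```python
import math

# Fabricated portfolio snapshot: for each rating grade, the PD assigned by the
# model at the start of the period, the number of obligors, and the number
# that actually defaulted. All figures are illustrative.
observations = [
    {"grade": "A", "model_pd": 0.005, "obligors": 4000, "defaults": 24},
    {"grade": "B", "model_pd": 0.020, "obligors": 2500, "defaults": 61},
    {"grade": "C", "model_pd": 0.080, "obligors": 900,  "defaults": 95},
]

for row in observations:
    n, pd_hat, d = row["obligors"], row["model_pd"], row["defaults"]
    observed_rate = d / n
    # Normal approximation to a binomial test: how many standard deviations
    # does the observed default count sit from the model's prediction?
    z = (d - n * pd_hat) / math.sqrt(n * pd_hat * (1 - pd_hat))
    flag = "review" if abs(z) > 1.96 else "ok"  # ~5% two-sided threshold
    print(f"grade {row['grade']}: predicted PD {pd_hat:.2%}, "
          f"observed {observed_rate:.2%}, z = {z:+.2f} -> {flag}")
```

In this illustrative run, grades A and B sit within the tolerance, while grade C defaults materially exceed the model’s prediction and would be referred for review.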
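Finally, a compact sketch of a champion-versus-challenger comparison. The PD figures and the tolerance are illustrative assumptions; the point is simply to flag obligors where the two models diverge enough to warrant a review of assumptions and documentation.

```python
# Fabricated PD estimates for the same obligors from the production ("champion")
# model and an independently built challenger; all figures are illustrative.
champion = {"C1": 0.012, "C2": 0.035, "C3": 0.060}
challenger = {"C1": 0.014, "C2": 0.031, "C3": 0.110}

REL_TOLERANCE = 0.25  # assumed internal tolerance, not a regulatory figure

for obligor, pd_champion in champion.items():
    pd_challenger = challenger[obligor]
    rel_gap = abs(pd_champion - pd_challenger) / max(pd_champion, pd_challenger)
    if rel_gap > REL_TOLERANCE:
        print(f"{obligor}: champion {pd_champion:.2%} vs challenger "
              f"{pd_challenger:.2%} -> review assumptions and documentation")
```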

Furthermore, the recent publication of the supervisory priorities for 2020, including upcoming supervisory activities in relation to the European Banking Authority’s internal ratings-based (IRB) repair programme, indicates that the supervisory focus on the adequacy of internal models will intensify in the coming year. In this context, it is all the more critical for banks to establish a robust MRM framework and implement suitable measures to minimise the chances of models producing misleading or unwarranted results.

To sum up, MRM is not only a growing supervisory priority; it is also crucial to avoiding harmful decisions across areas such as lending, risk management, provisioning and capital requirements. A robust MRM framework is therefore essential, and the ECB’s letter shows that nearly all banks could do better, especially on data quality.

For more information, please visit KPMG’s Model Risk Management (MRM) solutions and thought leadership hub page.