Publication 3 in our Model Risk Management Thought Leadership Series
In our previous articles, we explored Model Risk Management (MRM) from a broad industry perspective and then through an insurance sector lens. In this publication, we turn to AI: how to manage the additional risks that AI models introduce, and how AI can in turn help manage traditional model risks.
AI and its applications and challenges within the insurance industry
As outlined in our previous articles, a simple model error can lead to a model failure that could cause significant financial or reputational damage to your company. The exposure to model risk is heightened as models become more widespread across the business and as they become more complex.
The adoption of Artificial Intelligence (‘AI’) into the insurance industry can provide substantial advantages – this technology can help streamline underwriting, improve fraud detection and optimise pricing techniques. However, insurers need to consider the implications of increased adoption of more complex models across the business.
As part of its “Future Focused” strategy, the Central Bank of Ireland (the Central Bank) has identified increasing digitalisation across the insurance value chain as an emerging risk within the insurance sector. The rapid and ongoing advancements in digitalisation (referred to as the use of Big Data and Related Technologies (BD&RT)) present increased opportunities; however, with these benefits comes the potential for new or heightened risks. We explore some of these key risks in more detail below.
As AI models grow in popularity and become embedded in business operations, these risks need to be fully understood and appropriately managed.
Bias and fairness
Real-life data can contain inherent biases which, when used within a model, are passed through to the decision-making process. There are multiple ways in which AI models may behave unethically, providing different opportunities, resources, information or quality of service to specific groups of people.
Biased data can in turn affect the future data used for subsequent model training. For example, a biased credit scoring model influences customer selection, which in turn shapes the composition of the future portfolio and hence the future input data. This creates a feedback loop in which the bias propagates and is reinforced over time.
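The reinforcement dynamic described above can be made concrete with a deliberately stylised sketch (hypothetical groups, rates and decision rule, not real data): a "model" that scores each group by its historical approval rate, combined with a simple approval cutoff, pushes an initial gap between two equally creditworthy groups to extremes.

```python
# Stylised bias feedback loop. Groups, rates and the cutoff are
# illustrative assumptions; both groups are equally creditworthy,
# yet the historical gap is not only preserved but amplified.

CUTOFF = 0.5

def retrain_and_decide(approval_rates):
    """One cycle: 'train' on past decisions, then decide the next cohort.
    The model simply approves a group if its historical rate clears the cutoff."""
    return {
        group: 1.0 if past_rate >= CUTOFF else 0.0
        for group, past_rate in approval_rates.items()
    }

# Historical data: group A was approved 80% of the time, group B only 30%,
# even though their true risk profiles are identical.
rates = {"A": 0.8, "B": 0.3}

for cycle in range(3):
    rates = retrain_and_decide(rates)
    print(f"cycle {cycle + 1}: {rates}")
```

After a single retraining cycle the gap becomes absolute: group A is always approved and group B never is, and every later cycle re-learns and reinforces that outcome, exactly the self-propagating loop described above.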
Data privacy
AI systems introduce additional risks when handling sensitive data. These risks can include data breaches and privacy violations. Since AI relies on vast amounts of data for learning and prediction, understanding how data is processed becomes challenging, especially for those without expertise in AI technologies. Obtaining consumer consent is crucial, but frequent retraining of AI systems makes it difficult to ensure that consent remains valid for each new use of the data.
As AI becomes more prevalent within organisations, insurers must ensure that the use of consumer data aligns with the original purpose for which it was gathered. Increased reliance on outsourcing arrangements, lack of appropriate controls and governance and the lack of clear roles and responsibilities around ownership can heighten these issues.
Increased model complexity
Traditional models follow a fixed logic and predefined steps, resulting in a rigid structure. In contrast, AI models continuously adapt by learning from new data and applying innovative techniques. This dynamic development process requires increased monitoring for effective management.
As a result, stakeholders must have a clear understanding of both the features and risks associated with the model. AI model users play a crucial role in explaining the underlying methodologies and model outcomes to all business stakeholders, ensuring transparency and informed decision-making.
Cybersecurity
Although cybersecurity and vulnerability to ransomware attacks are not new, addressing these concerns in the context of AI models requires a shift in our current approach. The use of large datasets introduces heightened risks of data breaches and tampering, potentially influencing model outcomes. While AI models offer substantial benefits for businesses, they also create opportunities for cyber criminals to intensify the volume and effectiveness of their attacks.
Regulatory compliance
Regulatory frameworks such as the EU AI Act, GDPR and DORA provide insurers with requirements and guidance for implementing AI systems within their organisations.
However, given the rapid evolution of the AI landscape, how these requirements are implemented within companies must be regularly reviewed and updated to keep pace with the latest developments.
Insurers and reinsurers operating globally need to stay vigilant about the different AI regulations they may be required to comply with; these include the EU AI Act and any additional requirements and guidance that local regulators may provide.
Non-compliance with regulations can lead to significant penalties, the severity of which depends on the specific violation. For example, the EU Commission has introduced fines of up to €35m or 7% of worldwide annual turnover, whichever is higher, for non-compliance with the prohibited AI practices outlined within the EU AI Act.
How insurers can manage the risks with using AI models
The key components of model risk management which we introduced in our first publication remain the foundation of managing model risk within an organisation. The implementation and use of AI models can increase the level of existing risk, and additional AI-specific risks can emerge within a business. To manage this effectively, the traditional model risk management framework needs some adaptation, with additional safeguards being introduced.
- Enhanced Model Inventory: An insurer’s model inventory must be expanded to comprehensively cover all AI models used within the organisation, whether developed internally or sourced from third parties. The firm-wide definition of a model’s materiality must be updated to reflect the additional risks associated with AI.
- Updated Model Tiering: The increased complexity and reduced interpretability of AI models require an insurer to review its model tiering assessment. This should be consistent with the insurer’s risk appetite, which will require re-evaluation when AI systems are introduced into operations. New risk measurements may need to be introduced, such as bias detection, and areas such as the insurer’s definitions of ‘unbiased’ and ‘fair’ may need to be explored.
Correctly classifying the level of risk associated with each AI model used within a business is crucial. All AI systems deployed within the EU will require appropriate risk categorisation to ensure compliance with the EU AI Act, under which the level of regulatory compliance required for each model is proportionate to the scale of risk associated with it. Note that the EU AI Act designates the following systems as high risk: “AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance.”
- Additional Testing and Validation: The existing model review process and risk assessment procedures may no longer be sufficient for AI models. More frequent and robust reviews are required to ensure the AI model remains appropriate. These additional tests and validations should include:
- Enhanced data validation and cleansing prior to use
- Avoidance of over-reliance on a small number of model parameters, which can result in overfitting
- Adoption of techniques to mitigate bias, both in the data and in the modelling approach taken
- Additional testing on model outcomes to ensure integrity and interpretability of results
- Increased level and frequency of monitoring
- Control Framework: Introduction of AI systems into an insurance company will require an enhancement to the existing model governance framework. Given the nature of AI models, enhancements must be made to the data management procedures to ensure unbiased, high-quality data is being used to develop and train AI models.
The security and privacy of the data being used must be managed appropriately. An improved operational risk control framework should be developed to understand the internal and external risks posed by AI systems to all stakeholders of the business. Additionally, the workforce within the business should undergo training in AI literacy and AI model governance to develop a strong culture of appropriate use and compliance.
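As an illustration of the kind of bias detection testing listed above, the sketch below computes group-level approval rates and the widely used “four-fifths” disparate-impact screen. The group labels, decisions and the 0.8 threshold are illustrative assumptions, not a prescribed method; an insurer would anchor these choices in its own definitions of ‘unbiased’ and ‘fair’.

```python
# Minimal bias check: approval rates per (hypothetical) protected group
# plus the four-fifths disparate-impact ratio. Data are illustrative.

def approval_rates(records):
    """records: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in records:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group approval rate divided by highest; < 0.8 is a common red flag."""
    return min(rates.values()) / max(rates.values())

records = (
    [("A", True)] * 80 + [("A", False)] * 20    # group A: 80% approved
    + [("B", True)] * 50 + [("B", False)] * 50  # group B: 50% approved
)
rates = approval_rates(records)
ratio = disparate_impact_ratio(rates)
print(rates, round(ratio, 3))  # ratio 0.625 -> falls below the 0.8 screen
```

A ratio below 0.8 does not prove unfairness on its own, but it is a cheap, automatable trigger for the deeper review described above.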
While integrating AI models offers immense potential to insurers, they must prioritise transparency, data quality, ethical considerations, and regulatory compliance, to ensure they can harness the benefits of AI while minimising risks.
How AI can help reduce model risk
In response to the ever-evolving nature of the insurance industry, insurers are beginning to harness the power of AI right across the value chain to ensure they remain competitive in an environment that is becoming ever more technologically advanced.
In particular, insurers are deploying AI technology in their models in an effort to enhance accuracy and efficiency. As the complexity of these models increases, insurers will need to ensure they maintain a firm grasp on their exposure to model risk and AI could potentially be a strong ally in this endeavour.
Some of the many ways in which AI can help reduce model risk include:
- Automation of data collection and validation: Traditionally, insurance companies have relied on labour-intensive methods of data collection, processing, and validation. However, the advent of technologies such as Generative AI presents insurers with the opportunity to significantly reduce the potential for errors, ensure data consistency, and minimise the risk of flawed models. Whether it be using the power of AI to analyse data from legacy systems, validate data submitted throughout the claims process or handle unstructured data from alternative sources, AI-driven data pipelines will enable insurers to enhance efficiency and accuracy.
- Enhanced Model Validation and Back-Testing: Traditional model validation often involves rigorous manual checks and back-testing, which can command the valuable time and resources of the model owner. Furthermore, as time goes on these tests can become outdated and may no longer reflect the data available to the insurer. AI introduces a paradigm shift: insurers can now test and validate their models prior to production using advanced and customisable tests. These tests allow insurers to identify biases in their data, monitor the degradation of the model over time and calculate metrics on the quality of the data and the model output. By using AI to automate these processes, insurers achieve faster validation cycles, adapt quickly to changes in their data and market dynamics, and gain confidence in the reliability of their models.
- Superior Forecasting Accuracy: Standard models used by insurers typically fail to appropriately capture the non-linear relationships between macroeconomic variables, risk factors and insurance outcomes. This issue is often exacerbated during stressed scenarios. However, it is this domain in which AI models excel. Whether predicting claims frequency, assessing underwriting risks, or estimating reserve requirements, AI’s ability to handle complex relationships enhances insurers’ decision-making capabilities.
- Optimised Variable Selection: In the past, insurers may have been unable to fully identify the relevant features within their datasets that could be used as predictors for their overall exposure to specific risks. Furthermore, the limited capability of their models may have hindered the number of variables upon which they could base their results. AI could make this an issue of the past: AI algorithms, coupled with Big Data analytics platforms, can process vast volumes of data and extract multiple candidate variables. A rich feature set covering a wide range of risk factors leads to robust, data-driven models for pricing, reserving, and capital allocation. Whether it is identifying relevant features for catastrophe risk modelling or optimising variables for mortality prediction, AI-driven feature engineering ensures models capture essential information.
- Understandability and Transparency:
- AI can be applied to traditional deterministic models to ingest and interpret model code and generate model documentation, increasing the transparency around these models. AI can also track model changes and compare variable definitions across multiple models, which strengthens model change governance within your company.
- AI models often face criticism for being “black boxes”, however, techniques like SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) provide insights into model decisions. Insurers can understand why a model made a specific prediction, enhancing transparency and trust. Explainable AI not only satisfies regulatory requirements but also empowers insurers to communicate model outcomes effectively to stakeholders.
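One concrete form the model monitoring described above can take is the Population Stability Index (PSI), which compares the distribution a model was validated on with the distribution it currently sees in production. The bin proportions below are illustrative assumptions, not real portfolio data.

```python
# Population Stability Index (PSI) over pre-binned proportions:
# PSI = sum over bins of (actual% - expected%) * ln(actual% / expected%).
import math

def psi(expected_pct, actual_pct, eps=1e-6):
    """PSI between two binned distributions; each list should sum to ~1.
    eps guards against empty bins, which would make the log undefined."""
    total = 0.0
    for e, a in zip(expected_pct, actual_pct):
        e, a = max(e, eps), max(a, eps)
        total += (a - e) * math.log(a / e)
    return total

expected = [0.20, 0.20, 0.20, 0.20, 0.20]  # score distribution at validation
actual   = [0.30, 0.25, 0.20, 0.15, 0.10]  # score distribution in production

value = psi(expected, actual)
print(round(value, 3))  # 0.135
```

Common rules of thumb read a PSI below 0.1 as stable, 0.1 to 0.25 as moderate drift warranting investigation, and above 0.25 as significant drift likely requiring model review or retraining.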
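As a minimal sketch of the variable selection point above, candidate variables can be screened by the absolute correlation of each with the target. The feature names and toy data are hypothetical, and real pipelines would apply richer criteria (mutual information, regularised models) to far larger datasets.

```python
# Rank candidate features by |Pearson correlation| with the target.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical candidate features and a toy claims-cost target that is
# in fact driven entirely by sum_insured.
features = {
    "sum_insured":   [1, 2, 3, 4, 5],
    "policy_tenure": [2, 1, 4, 3, 5],
    "postcode_code": [1, 5, 2, 4, 3],
}
target = [2, 4, 6, 8, 10]

ranked = sorted(features, key=lambda f: abs(pearson(features[f], target)),
                reverse=True)
print(ranked)  # strongest candidate predictor first
```

Correlation screening is only a first pass: it misses interactions and non-linear effects, which is precisely where the AI-driven feature engineering described above adds value.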
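SHAP rests on Shapley values from cooperative game theory, and for a tiny model they can be computed exactly, as sketched below. The additive “model”, inputs and baseline are illustrative assumptions; in practice, libraries such as shap apply efficient approximations to real models.

```python
# Exact Shapley values: average each feature's marginal contribution
# over all orderings in which features are switched from a baseline
# value to the actual input value.
from itertools import permutations

def model(x):
    # Toy pricing model: depends on x[0] and x[1], ignores x[2].
    return 2 * x[0] + 3 * x[1] + 0 * x[2]

def shapley_values(f, x, baseline):
    """Exact Shapley attribution of f(x) - f(baseline) across features."""
    n = len(x)
    contrib = [0.0] * n
    orderings = list(permutations(range(n)))
    for order in orderings:
        current = list(baseline)
        prev = f(current)
        for i in order:
            current[i] = x[i]       # switch feature i on
            now = f(current)
            contrib[i] += now - prev
            prev = now
    return [c / len(orderings) for c in contrib]

phi = shapley_values(model, x=[1, 1, 1], baseline=[0, 0, 0])
print(phi)  # [2.0, 3.0, 0.0]
```

Because the toy model is additive, each feature’s Shapley value equals its coefficient, and the values always sum to the difference between the prediction and the baseline prediction, which is what makes these attributions useful for explaining individual decisions to stakeholders.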
Conclusion
AI is a double-edged sword that insurers must wield with care. By embracing AI’s capabilities, insurers can enhance accuracy, streamline processes, navigate regulatory challenges and ultimately achieve results that would be too resource-intensive to deliver with current models.
However, successful implementation requires thoughtful governance, careful management of risks, and collaboration across the business between data scientists, actuaries, and business leaders. As we move forward, integrating AI into risk management practices will be essential for staying competitive and resilient in an increasingly data-centric world.
Remember, AI is not a magic silver bullet; it’s a strategic asset. With the right approach, insurers can transform model risk management and look towards a more data-driven future that benefits both their bottom line and their policyholders.
How KPMG can help
KPMG has a successful track record of providing a broad range of financial and strategic advisory services related to model risk management to clients across a wide array of industries.
KPMG’s Model Risk Management approach can help you create a well-controlled, integrated, and comprehensive Model Risk Management programme, offering a practical framework for identifying, quantifying, and mitigating model risk by addressing the sources of risk head-on.
Depending on your specific needs, KPMG can assist with any combination of the components of a successful Model Risk Management programme including:
- Model inventory
- Model risk assessment
- Model development & implementation
- Technology solution
- Model validation
- Model policy & governance
- Model data aggregation & quality
- Internal audit assistance
Get in touch
Discover how to improve your Model Risk Management programme by talking to our in-house experts today.