The insurance industry runs on data. Data drives decisions from underwriting to claims to pricing, as well as customer interactions, marketing and products. The exponential growth in the volume of structured and unstructured data available to insurers provides the opportunity to make faster, more informed decisions and to operate far more efficiently. The challenge has been how to harness the potential of all of that data. Like so many other businesses, insurance companies are turning to artificial intelligence (AI) and other big-data technologies for help. With the rise of insurtech, insurance companies are finding they can gain a competitive advantage with data-driven technologies that enable them to make decisions faster and serve customers better.

As more companies develop use cases and the technology continues to advance, the use of AI is expanding rapidly. But the rise of AI, machine learning and other data-driven technologies has brought with it more questions about how to ensure they are used ethically, in ways that enhance rather than diminish trust and customer confidence. How can companies and customers be assured that the data and algorithms used to make critical decisions are not biased or otherwise untrustworthy?

It is an issue that insurance companies everywhere are grappling with, together with government and oversight bodies at the global, regional and country levels. From a regulatory standpoint, the focus to date has been largely around data privacy and protection of personally identifiable information, most notably with the introduction of the EU’s General Data Protection Regulation (GDPR) in 2018.

But the algorithms and processes used to interpret and make decisions based on that data have also come under scrutiny. GDPR’s protections extend to the use of algorithms to make decisions that have significant legal effects on individuals.1

Since GDPR took effect, numerous bodies have put out guidance to support companies in the ethical use of data and AI, including the OECD, European Commission (EC), and in the UK the Financial Conduct Authority (FCA) and Open Data Institute (ODI).

It also appears clear that businesses, and the insurance industry in particular, welcome the guidance, with many believing governments should play a larger role. The Association of British Insurers has urged governments as well as companies to establish ethical rules for the use of AI,2 and Insurance Europe has similarly called on the EC to put together a framework of prescriptive and voluntary measures to promote the ethical use of AI. In the US, the National Association of Insurance Commissioners (NAIC) has adopted guiding principles on AI, based on the OECD’s AI principles.3

Overall, there is considerable commonality among the models and guidelines on the responsible use of AI issued to date. Most are principles-based and voluntary, giving organizations wide flexibility on whether and how to implement them.

The insurance sector in the Netherlands has taken a somewhat different approach, and a significant step forward, with the introduction of the “Ethical Framework for the application of AI in the Insurance Sector,”4 by the Dutch Association of Insurers (DAI). The DAI represents the interests of private insurance companies operating in the Netherlands; its members account for more than 95 percent of the Dutch insurance market.5 The framework reflects the recognition by Dutch insurers of the need to be proactive in the use of AI and other data-driven products and processes in their relationships with customers.

With all of the guidance and recommendations put forward by other government and industry bodies, the DAI recognized there was no need for yet another set of general guidelines. In reviewing the existing guidelines and discussing them with its members, it became clear that most guidance tended to be high-level and abstract, and not always straightforward to apply on a day-to-day basis. So, with its Ethical Framework, the DAI and its members set out to provide Dutch insurers with a more actionable set of policies to follow.

The framework is based on the recommendations of the High-Level Expert Group on Artificial Intelligence. This advisory body to the European Commission determined that the ethical use of AI requires respecting seven requirements for responsible AI:

1. Human agency and oversight

2. Technical robustness and safety

3. Privacy and data governance

4. Transparency

5. Diversity, non-discrimination and fairness

6. Societal and environmental well-being

7. Accountability


The DAI chose not to take a greenfield approach in developing the criteria and rules set forth in the Framework. Instead, it took the overarching principles set forward by the EC’s expert group and focused them on the particular needs of the Dutch insurance sector.

Then it went a step further and decided to make the Framework binding on DAI members as of 1 January 2021.

It is important to note that while the title of the Framework could give the impression it applies strictly to artificial intelligence, it in fact encompasses the use of data for data-driven decision making more broadly. It is also about more than tools and technology; it requires that insurance companies use the right data for the right purposes and comply with data privacy requirements.

Proper use of data is critical to the Framework. In 2020, the DAI was confronted with a case of data being used improperly to identify fraud. It was found that when a customer merely made an informal inquiry to an insurance company about what to do after damage had occurred, that information was entered into the insurer’s system as a “claim.” Without human intervention, this erroneous use of data could ultimately lead to issues of insurability.
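
The case illustrates a broader data-governance point: records should capture the customer’s actual intent, and ambiguous contacts should default to human review rather than to the most consequential label. As a minimal, hypothetical sketch in Python (the record types, field names and rules are illustrative assumptions, not provisions of the Framework), such a safeguard at the point of data entry might look like this:

    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional

    class ContactType(Enum):
        INQUIRY = "inquiry"  # customer asked what to do; no claim was filed
        CLAIM = "claim"      # customer formally requested compensation

    @dataclass
    class CustomerContact:
        customer_id: str
        description: str
        claim_form_submitted: bool  # explicit, verifiable signal of intent

    def classify_contact(contact: CustomerContact) -> Optional[ContactType]:
        """Classify a contact record; return None when a human must decide."""
        if contact.claim_form_submitted:
            return ContactType.CLAIM
        # Without an explicit claim form, never auto-record a claim:
        # route the record to a human reviewer instead of guessing.
        return None

    contact = CustomerContact("C-123", "What should I do about water damage?", False)
    if classify_contact(contact) is None:
        print("Routed to human review; not recorded as a claim.")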

This is where principles of human involvement, monitoring and transparency become extremely important. These are hallmarks of the Dutch Framework.

Transparency with respect to the use of data-driven applications is key, and the DAI expands on the approach of GDPR and other ethical codes by making transparency more proactive. The Framework requires companies to consider how best to explain outcomes from AI or other data-driven applications to customers before an application is deployed. This means customer service agents and other customer-facing employees need to be brought into the loop prior to implementation and be prepared for any questions or application faults that may arise.
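
One way to read this requirement is that every outcome a model can produce should have a customer-friendly explanation prepared in advance. The following is a minimal, hypothetical sketch of that idea; the reason codes, mapping and wording are illustrative assumptions, not prescribed by the Framework:

    from dataclasses import dataclass

    @dataclass
    class Decision:
        outcome: str             # e.g. "premium_adjusted"
        reason_codes: list[str]  # machine-readable drivers of the outcome

    # Mapping prepared *before* deployment, so customer-facing staff
    # can explain any outcome the application is able to produce.
    EXPLANATIONS = {
        "high_claim_frequency": "Recent claim frequency in your area increased.",
        "vehicle_risk_class": "Your vehicle model falls in a higher risk class.",
    }

    def explain(decision: Decision) -> list[str]:
        """Translate reason codes into customer-friendly explanations."""
        return [
            EXPLANATIONS.get(code, "A specialist will review and explain this factor.")
            for code in decision.reason_codes
        ]

    d = Decision("premium_adjusted", ["vehicle_risk_class"])
    print(explain(d))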

The requirement of human involvement and transparency with respect to the data-driven applications being used is found throughout the framework, with specific provisions for human agency and oversight of AI as well as human control and supervision of applications.

According to Richard Weurding, General Director, the DAI made it a priority to require the right balance of humans in the loop when applying data-driven technologies. “Human governance is hugely important; there can’t be total reliance on technology and algorithms. Human involvement is essential to continuous learning and responding to questions and dilemmas that will inevitably occur. Companies want to use technology to build trust with customers, and human involvement is critical to achieving that.”


The DAI knew that companies couldn’t just flip a switch and meet the requirements of the Framework. Considerable planning and operational changes were required to prepare their people and ensure the right data was in place so that data-driven applications would comply with the Framework.

In advance of the Framework taking effect, the DAI worked with KPMG in the Netherlands to get the word out and inform insurers of what they needed to do to meet the new requirements. The KPMG team organized a series of webinars and developed toolkits with actionable steps Dutch insurers needed to take to meet the controls, standards and risk requirements of the Framework. Not surprisingly, one of the biggest challenges for insurers is data quality: a foundational requirement for complying with the Framework is ensuring that AI and other data-driven applications are built on proper data sets.
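
To make the data-quality point concrete, here is a minimal, hypothetical sketch of the kind of automated quality gate an insurer might place in front of a data-driven application; the field names, checks and thresholds are illustrative assumptions, not the contents of the actual toolkits:

    import pandas as pd

    def quality_gate(df: pd.DataFrame, required: list[str],
                     max_null_rate: float = 0.05) -> bool:
        """Return True only if the dataset meets basic completeness checks."""
        # All required fields must be present.
        if any(col not in df.columns for col in required):
            return False
        # No required field may exceed the allowed share of missing values.
        if (df[required].isna().mean() > max_null_rate).any():
            return False
        # Exact duplicate records often indicate ingestion errors.
        if df.duplicated().any():
            return False
        return True

    # Toy policy dataset with missing premiums and an exact duplicate row.
    df = pd.DataFrame({
        "policy_id": ["P1", "P2", "P2"],
        "premium": [120.0, None, None],
    })
    print(quality_gate(df, required=["policy_id", "premium"]))  # False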

The DAI plans to monitor implementation and in two years will assess how well insurers are working with the Framework and ensure that it is driving responsible use of data and meeting the needs of customers.

To be sure, there will be further debate and likely additional regulation of AI and other data-driven technologies, as well as of the data behind them. With the launch of the Framework, insurance companies in the Netherlands are in a leading position for whatever is to come.

Richard Weurding

Since March 2006, Richard has been the director general of the Dutch Association of Insurers and a member of the Association’s management board. He was previously the executive secretary and was closely involved in several innovation projects within the organization.

Jos Schaffers

Jos is a policy advisor for the Dutch Association of Insurers, specializing in data protection and the ethical use of technology. He developed a yearly monitor on differentiation (the solidarity monitor) to see whether the use of more data leads to less insurability. Jos studied public administration in Leiden and has worked for the Dutch Association since 2007.


Contributors

Richard Weurding
General Director
Dutch Association of Insurers

Jos Schaffers
Policy Advisor, Privacy and Big Data
Dutch Association of Insurers

Frank van Praat
Senior Manager, Trusted Analytics
KPMG in the Netherlands

Sander van der Meijs
Senior Manager, Digital Transformation
KPMG in the Netherlands