Ian West, Partner | Leanne Allen, Partner
5 min read

For businesses, one of the most exciting aspects of artificial intelligence (AI) is its ability to give them a competitive edge. But with that potential comes a whole new set of pressures and risks.

Firstly, there’s pressure to invest in AI and machine learning (ML) technology and to drive the value it promises. But consumers, investors and society expect a responsible approach to using it in product and service development.

Secondly, stakeholders want firms to use AI and big data as a force for good – for example, by employing them to strengthen their ESG programmes. And there are particular demands on certain sectors: financial services firms, for example, are being asked to exploit AI to address socio-economic inequality and offer more sustainable financing.

At the same time, AI presents new and enhanced risks. Without robust governance and controls for its design and use, it can have a number of unintended consequences:

  • Bias in decision models can cause financial harm to some groups of consumers, by systematically treating them less favourably than others. This isn’t deliberate: it might stem from a lack of historical data on the disadvantaged groups. But it’s clearly an undesirable outcome – and one which could lead to reputational harm.
  • Unfair pricing practices may systematically lock certain groups out of products like insurance cover. This can happen if a lack of data on such groups removes them from the risk-pooling processes that underlie insurance policy design.
  • We’ve seen companies target consumers with products that don’t meet their needs. On occasion this has been deliberate, but it can also be the result of inadequate data use.

Inevitably, issues like these reduce trust in businesses’ use of AI and big data. As firms transform to become more data- and insight-led, resolving these issues will determine whether they succeed or fail.

Immediate priorities

Given the complexities involved, organisations are duly building governance models and control frameworks for the use of AI, ML and big data.

That involves defining and operationalising their ethical principles. Most are taking a risk-based approach to this exercise. And they’re basing their AI practices on principles such as fairness, transparency, accountability and explicability (so that people can understand the reasoning being used by AI solutions).

With these priorities in mind, companies are focusing on a range of essential activities – including:

  • adapting their operating and governance models for ethical use of AI and big data
  • updating their risk registers to include data and AI risks
  • building control frameworks for AI-driven product design and development
  • setting up ethics boards or councils to challenge big data and AI use cases
  • ensuring compliance with relevant regulation, and preparing for any impending new rules
  • strengthening their privacy impact assessments
  • broadening the diversity of their product development and compliance teams to help eliminate bias
  • training their employees on the use of big data in AI, and the associated risks and ethical concerns
  • communicating their efforts to stakeholders to promote trust.

Guiding pillars

As they put these steps in place, how can firms grasp the opportunities AI offers, while safeguarding employees, customers and society?

From our work supporting organisations as they implement AI and ML solutions, we’ve identified five guiding pillars to help instil an ethical AI culture: 

1.     Start now

One of the starkest – and most immediate – challenges in AI deployment is the disruption it can bring to the workplace.

Helping your employees adjust to the role of machines in their jobs is a vital early step. Partnering with academic institutions is an effective way to educate, train and manage an AI-enabled workforce. And it will go some way towards allaying staff anxieties about the introduction of AI.

2.     Develop strong governance

You can’t gain stakeholders’ trust in how your company is using AI unless your leaders are doing two things to support accountability.

Firstly, they must understand the data, control framework, methods and toolsets used to build your AI systems. Secondly, they must implement enterprise-wide policies on deployment, data use and privacy standards.

An appropriate, risk-based (or proportionate) approach to governance will ensure your controls are suited to the risks that AI presents in the context in which it’s applied. That will allow innovation to progress safely and ethically.

3.     Address cyber risks

AI solutions are driven by sophisticated, autonomous algorithms. And these come with inherent cybersecurity risks – particularly as we don’t yet fully understand how they work. If hacked, they can be made to:

  • cause autonomous vehicles to misinterpret stop signs, speed limits, etc.
  • bypass facial recognition – e.g. on ATMs
  • fake voice commands
  • produce incorrect ML-based medical predictions.

Strong security protocols must therefore be built into them, and into your data governance. You should look to:

  • continuously review your algorithms
  • monitor how they’re trained, and who’s training them
  • track your data sources, and any changes made to your data
  • test your trained system against backdoor hacks.

You’ll need to know whether you have the skills within the business to test your AI systems – and where to find them if not.
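As a simple illustration of what part of that testing might look like in practice, here is a minimal Python sketch. It fingerprints a training data file so that changes to data sources are traceable, and estimates how often a candidate backdoor trigger flips a trained model’s predictions to an attacker-chosen label. The `model`, `apply_trigger` and `target_label` inputs are hypothetical, and a scikit-learn-style predict() interface is assumed; this is a sketch of the idea, not a substitute for a full security testing programme.

```python
import hashlib

import numpy as np


def fingerprint_dataset(path: str, chunk_size: int = 1 << 20) -> str:
    """Return a SHA-256 fingerprint of a data file, so that any change
    to a training data source can be detected and logged."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()


def backdoor_trigger_rate(model, X: np.ndarray, apply_trigger, target_label) -> float:
    """Estimate how often a candidate trigger pattern pushes the model's
    prediction to an attacker-chosen label (higher = more suspicious).

    `model` is assumed to expose a scikit-learn-style predict() method;
    `apply_trigger` stamps the trigger pattern onto a copy of the inputs.
    """
    triggered = apply_trigger(X.copy())
    predictions = model.predict(triggered)
    return float(np.mean(predictions == target_label))
```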

4.     Overcome unfair bias

Eliminating unfair bias will require your leaders to understand how your algorithms work, and which attributes in the training data influence the predictions they make. Those attributes must be relevant, appropriate to the model’s aims, and approved for use through the right controls.

You must also continuously monitor your algorithms with appropriate feedback loops. And don’t let model training lapse, as model drift is one way that bias can creep in.
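To make the idea of a monitoring feedback loop concrete, here’s a minimal sketch in Python. It computes one widely used fairness measure – the gap in favourable-outcome rates between groups – so it can be tracked as new decisions flow through. The `group` flag and the 5% tolerance are illustrative assumptions; real monitoring would cover several metrics and protected characteristics.

```python
import numpy as np


def demographic_parity_gap(predictions: np.ndarray, group: np.ndarray) -> float:
    """Difference in favourable-outcome rates between two groups.

    `predictions` are binary model outputs (1 = favourable outcome);
    `group` flags membership of the group being monitored (1 = member).
    """
    rate_members = predictions[group == 1].mean()
    rate_others = predictions[group == 0].mean()
    return float(rate_members - rate_others)


# Illustrative feedback loop: recompute the gap on each new batch of
# decisions and raise an alert if it drifts beyond an agreed tolerance.
TOLERANCE = 0.05  # example threshold; the right value is a business decision


def check_fairness(predictions: np.ndarray, group: np.ndarray) -> None:
    gap = demographic_parity_gap(predictions, group)
    if abs(gap) > TOLERANCE:
        print(f"Fairness alert: outcome-rate gap of {gap:.1%} exceeds tolerance")
```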

To make all this happen, you may need to recruit a dedicated training team – ideally including an AI ethicist. And consider setting up an independent review of any critical models where unfair bias could have an adverse impact on consumers or society.

5.     Enhance transparency

Transparency must underpin everything you do with AI: it’s your contract of trust with your stakeholders.

Transparency over how you’re implementing the guiding principles will reassure your workforce, investors and customers that you’re addressing the biases and security risks AI can introduce. Only then will they genuinely trust your business’s use of AI, ML and big data.

Achieving that transparency goes beyond publishing your AI policies and practices. You must also provide clear, simple messaging, so that stakeholders can easily understand your use of AI.

Our guiding pillars set out how to create policies to drive the right actions and outcomes when using AI, so as to build and maintain trust. Following them will help ensure your AI implementations are successful from both a commercial and ethical standpoint.

The KPMG team has extensive experience of helping businesses deliver value from AI, while managing the risks and ethical issues involved. Please get in touch to discuss how we can help your organisation.
