In AI we trust?

Why assurance is more important than ever in the age of machines.


Author: Professor Sander Klous, Data & Analytics Leader, KPMG International and KPMG in the Netherlands

Imagine you are a parent whose child is applying to get into school. You'd want your child to have the best possible chance of being accepted at the school of your choice, right? Now, what if you knew that the decision of where to place your child was being made by artificial intelligence (AI). Would you trust an algorithm to have your child's best interests at heart?

That's exactly the scenario parents in one of the major Dutch cities have encountered ever since the school system embraced AI to create a more equitable and evenly distributed student allocation system. The algorithm has been designed to prevent the oversubscription of popular schools while providing the overall best result for all children - but how can you prove it works accurately and ethically?
It was a challenge we embraced when we were asked to create a model of assurance that would give administrators and parents alike peace of mind that the algorithm was functioning fairly, so that everyone affected could have faith in the system.
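The allocation mechanism described above is, at heart, a classic matching problem. As a purely illustrative sketch (not the actual Dutch algorithm, whose details are not given here), a student-proposing deferred-acceptance procedure with a single lottery number per student to break ties might look like this:

```python
import random

def deferred_acceptance(preferences, capacities, seed=42):
    """Student-proposing deferred acceptance with random tie-breaking.

    preferences: dict student -> ordered list of school names
    capacities:  dict school  -> number of available places
    Returns dict student -> school (or None if unplaced).
    """
    rng = random.Random(seed)
    # One lottery number per student: the same draw is used at every
    # school, which keeps tie-breaking consistent and auditable.
    lottery = {s: rng.random() for s in preferences}
    next_choice = {s: 0 for s in preferences}          # index into pref list
    tentative = {school: [] for school in capacities}  # tentatively held
    unplaced = list(preferences)

    while unplaced:
        student = unplaced.pop()
        prefs = preferences[student]
        if next_choice[student] >= len(prefs):
            continue  # preference list exhausted; student stays unplaced
        school = prefs[next_choice[student]]
        next_choice[student] += 1
        tentative[school].append(student)
        # Keep the best lottery numbers up to capacity; bump the rest,
        # who then apply to their next choice on a later iteration.
        tentative[school].sort(key=lambda s: lottery[s])
        while len(tentative[school]) > capacities[school]:
            unplaced.append(tentative[school].pop())

    placement = {s: None for s in preferences}
    for school, students in tentative.items():
        for s in students:
            placement[s] = school
    return placement
```

Because assignments are only tentative until the process settles, popular schools never exceed capacity, and no student can improve their outcome by misreporting preferences, which is one reason mechanisms of this family are attractive for school allocation.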

Of course, trust has long been a defining factor in an organization's success or failure - underpinning reputation, customer satisfaction, loyalty and shareholder value. Increasingly though, with the widespread adoption of data analysis in general and more specifically of AI throughout business, machines and algorithms have become a significant part of the trust equation.

That poses serious questions for all organizations because, in the technological rush to gain a competitive advantage through AI, companies may be prepared to take on higher levels of risk even as the data and algorithms they depend on become more and more complex and opaque. This could lead to cases where, for example, executives are asked to make major decisions based on the output of an algorithm that they didn't create and don't fully understand.

The Wild West approach to data and analytics needs to evolve into a mature process, and quickly, for companies to maintain trust in how they do business. So far, that trust is in pretty short supply even within companies themselves. According to KPMG's recent Guardians of Trust report - a survey of 2,200 global information technology (IT) and business decision makers involved in strategy for data initiatives - just 35 percent had a high level of trust in their own organization's analytics.1

For AI to be truly transformative we must have confidence in how it functions. That's why a comprehensive assurance model for AI is so important - one that builds trust through guaranteeing that algorithms are reliable, the system is cyber secure, IT processes and controls are properly implemented, appropriate data management is in place and that there is a governance structure that understands the ethics of machine learning. That understanding is subsequently included in the management of a wide range of organizational risks, like the potential impact of failure on financial results or reputation.

When you consider this assurance model, auditing AI is not all that different from auditing financial statements. The same principles and good practices apply - such as the three lines of defense and the impact of potential mistakes (materiality). And just as with financial statements, the public interest should be the auditor's highest priority, along with a far-reaching willingness to be transparent and to cooperate closely with national and international regulatory bodies. As always, the auditor is accountable to the general public, as well as to regulators and the corporate sector.

Ultimately, the governance of machines shouldn't be fundamentally different from the governance of humans and it should be integrated into the structure of the entire enterprise. That way, hopefully those affected by AI decision-making will have as much belief in the system as the Dutch parents whose children will have a more equitable chance of school selection thanks to an independently audited algorithm.

Sanzhar Shaimerdenov, Senior Consultant, IT Advisory, KPMG in Kazakhstan and Central Asia, commented: “Companies across many industries in Kazakhstan, and especially those providing mass-market services, understand that retaining market share is no longer just a matter of expanding branch networks to extend coverage. They need to understand their customers - their interests, their purchasing power, the devices and products they have already bought - and respond rapidly to changes in their needs. To build a clear, transparent picture of customer profiles and to segment heterogeneous groups intelligently, many companies spent several years just structuring their existing data, identifying the key data points and assessing the accuracy of existing records. Fortunately, in shaping a customer data analytics strategy, there is an excellent opportunity to draw on the Western market, with its experience in addressing similar tasks and its wide range of out-of-the-box solutions and suppliers.

To date, only the most advanced Kazakhstani banks and telecom operators are completing the “Wild West” stage of big data expansion, in which the priority is detailed recording of user history and the search for additional sources of new customer data.

The second stage is to organize data in a convenient, accessible format, usually tailored to specific business tasks and paired with a simple, understandable model for assessing customer behavior. Such models, however, have a few key drawbacks: the limited, simplified format of the data they use, and the need for regular recalibration of their parameters. Intelligent algorithms built on machine learning can, by themselves, experiment with larger volumes of data, change in response to results and shift their own priorities over time as new customers and new input parameters appear.

Although, unlike with a human being, it is impossible to understand completely what a neural network or other artificial intelligence algorithm was guided by in performing its task, all of its actions, assigned scores and input parameters can be saved in an accessible form. We therefore need to abandon the old paradigm, in which we could easily look inside the models we were tuning, and instead learn to pilot these cutting-edge technologies, create internal control procedures and evaluate how well the final business task is performed.”
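The audit trail the quote describes, saving every action, assigned score and input parameter in an accessible form, can be sketched as a thin wrapper around any scoring function. All names here are hypothetical and this is only an illustrative construction, not a KPMG tool; the hash chain simply makes after-the-fact tampering with past records detectable:

```python
import hashlib
import json
import time

class AuditedModel:
    """Wraps a scoring function so every decision leaves an auditable record."""

    def __init__(self, score_fn, model_version, log):
        self.score_fn = score_fn
        self.model_version = model_version
        self.log = log  # append-only list standing in for durable storage

    def score(self, features):
        result = self.score_fn(features)
        record = {
            "timestamp": time.time(),
            "model_version": self.model_version,
            "inputs": features,
            "score": result,
        }
        # Hash chain: each entry commits to the previous entry's hash,
        # so altering any past record invalidates every later one.
        prev = self.log[-1]["entry_hash"] if self.log else ""
        payload = prev + json.dumps(record, sort_keys=True)
        record["entry_hash"] = hashlib.sha256(payload.encode()).hexdigest()
        self.log.append(record)
        return result
```

An auditor can then replay the log entry by entry, recomputing each hash against the stored inputs and scores, without needing to understand the model's internals - exactly the shift from inspecting a model's insides to controlling and verifying its outputs.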

1 Guardians of trust: Who is responsible for trusted analytics in the digital age?

© 2024 KPMG. KPMG Audit LLC, KPMG Tax and Advisory LLC and KPMG Valuation LLC, companies incorporated under the Laws of the Republic of Kazakhstan, member firms of the KPMG global organization of independent member firms affiliated with KPMG International Limited, a private English company limited by guarantee. All rights reserved.
