Building trusted AI in financial services
Concerned about risks and biases, a leading US financial services company called on KPMG to help improve the reliability of its artificial intelligence-driven outcomes.
Businesses are moving forward with artificial intelligence (AI) implementation, but ensuring its proper use is a high priority. In KPMG’s 2024 US Technology Survey, 41 percent of global respondents and 38 percent of US respondents said their AI deployment includes continually developing AI governance policies for ethical and fair use.
In light of those concerns, a leading US financial services company wanted to address the growing risks associated with its AI and machine learning (ML) systems. AI algorithms are designed by humans and are therefore vulnerable to human limitations, including errors, omissions and biases. By implementing KPMG Trusted AI, our strategic approach and framework for designing, building, deploying and using AI solutions in a responsible and ethical manner, the company was able to mitigate those risks and establish trust in its AI-driven decisions.
Our client recognized the growing risks associated with its AI and ML systems and turned to KPMG to help establish trust in AI-driven decisions that could otherwise unwittingly create ethical, legal or compliance issues.
In addition to helping our client design and implement a responsible AI program, we developed a proof of concept (POC) for the processes and technologies it would need to validate its AI/ML models and the data on which they rely.
Beyond the technology, we helped the client make the business case for centralizing its AI/ML governance, operating model and solutions, which had been scattered across multiple business units, by providing evidence of the risks and vulnerabilities that a more complete responsible AI solution could detect and prevent.
While many think of computers as accurate, neutral, unemotional and purely logical machines, the human-designed AI algorithms they run are susceptible to error and bias, especially if the data they rely on is incomplete or contains inherent biases.
Financial services companies are subject to a significant number of regulatory requirements and compliance mandates. Avoiding unintended age or race bias in decisions about extending credit or determining credit limits, for example, is key to meeting those mandates. Investment decisions, of course, can also have significant financial consequences. Therefore, it’s essential to have confidence in any decision that AI makes by ensuring that such decisions are free from error or bias.
Understanding, detecting and preventing such errors or biases, however, can be remarkably challenging. There’s often a lack of transparency into how AI is making its decisions — indeed, one of the reasons we turn to AI is precisely because it can detect patterns amid chaos that humans are incapable of seeing or even understanding.
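Even when a model is opaque, its outputs can still be tested. As a minimal sketch of one common output-level check, the snippet below computes a disparate impact ratio, comparing approval rates across groups; the column names, figures and the four-fifths threshold are illustrative assumptions, not the client’s actual metrics.

```python
# Minimal sketch of a disparate impact check on credit-approval outcomes.
# Column names, figures and the 0.8 threshold are illustrative assumptions.
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, outcome: str, group: str) -> float:
    """Ratio of the lowest group approval rate to the highest."""
    rates = df.groupby(group)[outcome].mean()
    return rates.min() / rates.max()

decisions = pd.DataFrame({
    "age_group": ["under_40"] * 100 + ["40_plus"] * 100,
    "approved":  [1] * 62 + [0] * 38 + [1] * 48 + [0] * 52,
})

ratio = disparate_impact_ratio(decisions, outcome="approved", group="age_group")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.48 / 0.62, roughly 0.77
if ratio < 0.8:  # the "four-fifths rule", used here as a screening heuristic
    print("Potential adverse impact: flag for review")
```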
Responsible AI is the practice of bringing transparency, accountability and trust to AI. Our goal was to help our client stand up an effective and efficient responsible AI program, powered by the technology required to validate its AI/ML models and data to support longer-term scale.
The challenge in this case was that each of the institution’s five separate business units had already implemented their own AI/ML solutions. But without any formal or centralized responsible AI program, or technology specifically designed to detect AI/ML model or data errors or biases, the client was at risk.
Beyond defining the business rules and requirements, our task was to create a POC and approach for a validation solution that would work with a centralized platform. Additionally, we were asked to evaluate the AI/ML models and data sets being used by the separate business units and the controls surrounding them, with the goal of helping to illustrate why centralizing the AI/ML platform could help to mitigate risks that these independent systems may be vulnerable to.
KPMG Trusted AI combines our team of highly skilled AI technology and data science professionals, our advanced and proprietary AI tools and accelerators, our extensive experience with leading AI solutions, our strong industry alliances, and our long-standing experience in governance, risk and compliance (GRC) to form a remarkably broad solution. It’s designed to help organizations stand up a responsible AI program and build and evaluate sound AI and ML models to establish trust in the output of those models.
Beyond the technology used to validate AI/ML models and data, successful responsible AI programs require well-designed operating models and processes that reflect leading GRC practices. We helped define key performance indicators (KPIs) and other metrics that would provide benchmarks for AI/ML model testing and validation.
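One widely used monitoring metric of this kind is the population stability index (PSI), which flags drift between the data a model was validated on and the data it scores in production. The sketch below is illustrative; the bucketing scheme and alert thresholds are assumptions, not the client’s actual KPIs.

```python
# Minimal sketch of a population stability index (PSI) check, a common
# model-monitoring KPI. The bucket count and thresholds are illustrative.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, buckets: int = 10) -> float:
    """PSI = sum((a% - e%) * ln(a% / e%)) over quantile buckets of `expected`."""
    edges = np.quantile(expected, np.linspace(0, 1, buckets + 1))
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    # Clip production values into the baseline range so every value lands in a bucket.
    a_pct = np.histogram(np.clip(actual, edges[0], edges[-1]), bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid division by zero and log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(650, 50, 10_000)  # e.g., credit scores at validation time
current = rng.normal(635, 55, 10_000)   # scores observed in production

print(f"PSI = {psi(baseline, current):.3f}")
# Common rule of thumb: < 0.1 stable, 0.1-0.25 monitor, > 0.25 investigate.
```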
We also helped stand up an independent AI/ML operations function within the organization. In the same way a firm can’t audit itself, and a software engineer can’t validate that their own code is free of bugs or vulnerabilities, an independent validator is needed to evaluate AI-related technologies and deployments.
The validation tooling we delivered included custom Python scripts, open-source libraries and other tools designed for this purpose.
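To give a flavor of what such a script can look like, here is a minimal validation check built on the open-source scikit-learn library; the toy model, data and pass/fail thresholds are illustrative assumptions.

```python
# Minimal sketch of an independent model-validation script built on open-source
# tooling (scikit-learn). The toy model, data and thresholds are illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]  # probability of the positive class

# Pass/fail gates a validator might enforce before a model is approved.
checks = {
    "AUC >= 0.70": roc_auc_score(y_test, scores) >= 0.70,
    "Brier <= 0.20": brier_score_loss(y_test, scores) <= 0.20,
}
for name, passed in checks.items():
    print(f"{name}: {'PASS' if passed else 'FAIL'}")
```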
We worked with the client’s preferred AI/ML platform solution. This cloud-based AI workbench is designed to help data scientists and developers prepare, build, train and deploy high-quality machine learning models quickly by bringing together a broad set of purpose-built capabilities. However, by default, it’s not configured with any validation capabilities.
We helped develop the scripts for the platform that would enable the client to validate any AI or ML algorithms and the data on which they depend — essentially, “built-in” validation instead of using a separate add-on solution that typically evaluates only the output of those algorithms.
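The data side of that validation can be sketched just as simply. The example below runs schema, missingness and range checks before any training happens; the column names, ranges and tolerances are illustrative assumptions, not the client’s actual rules.

```python
# Minimal sketch of pre-training data validation: schema, missingness and range
# checks that run before a model ever sees the data. The column names, ranges
# and tolerances are illustrative assumptions.
import pandas as pd

EXPECTED_RANGES = {
    "credit_score": (300, 850),
    "income": (0, 5_000_000),
    "loan_amount": (0, 10_000_000),
}
MAX_NULL_FRACTION = 0.02  # tolerate at most 2% missing values per column

def validate(df: pd.DataFrame) -> list[str]:
    issues = []
    for col, (lo, hi) in EXPECTED_RANGES.items():
        if col not in df.columns:
            issues.append(f"missing column: {col}")
            continue
        if df[col].isna().mean() > MAX_NULL_FRACTION:
            issues.append(f"{col}: too many nulls")
        if not df[col].dropna().between(lo, hi).all():
            issues.append(f"{col}: values outside [{lo}, {hi}]")
    return issues

sample = pd.DataFrame({
    "credit_score": [720, 650, 900],  # 900 is out of range
    "income": [55_000.0, None, 81_000.0],
    "loan_amount": [250_000, 30_000, 400_000],
})
print(validate(sample) or "all checks passed")
```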
Our validation customizations were designed to help the firm quickly and easily detect, diagnose and correct problems that can arise with AI/ML models, including both internally and externally developed and trained models. They gave our client a way to experiment with its models: “levers” it could pull to conduct what-if experiments, determine the effect of various parameters or assumptions on the models that rely on them, and ultimately quantify errors or biases in those models.
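As a minimal sketch of one such lever, the snippet below shifts a single input feature and measures how much the model’s approval rate moves, a simple form of sensitivity analysis; the model, the feature and the perturbation sizes are illustrative assumptions.

```python
# Minimal sketch of a what-if "lever": shift one input feature and measure how
# much the model's approval rate moves. The model, the feature index and the
# perturbation sizes are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2_000, n_features=10, random_state=1)
model = LogisticRegression(max_iter=1_000).fit(X, y)

FEATURE = 3  # hypothetical lever, e.g., a debt-to-income ratio
baseline_rate = model.predict(X).mean()

for shift in (-0.5, -0.25, 0.25, 0.5):  # shifts in (standardized) feature units
    X_whatif = X.copy()
    X_whatif[:, FEATURE] += shift
    rate = model.predict(X_whatif).mean()
    print(f"shift {shift:+.2f} -> approval rate {rate:.3f} "
          f"(delta {rate - baseline_rate:+.3f})")
```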
We also reverse-engineered several proprietary third-party models to help our client gain insights into those models that were otherwise a black box.
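A common technique for this kind of work, though not necessarily the exact method used on this engagement, is global surrogate modeling: fitting an interpretable model to the black box’s own predictions and inspecting the surrogate instead. The sketch below uses a gradient-boosted classifier as a stand-in black box.

```python
# Minimal sketch of global surrogate modeling: approximate an opaque model with
# an interpretable decision tree fit to the opaque model's own predictions.
# This illustrates the general technique, not the client's actual method.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier  # stand-in black box
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=3_000, n_features=8, random_state=2)
black_box = GradientBoostingClassifier(random_state=2).fit(X, y)

# Train a shallow, readable tree on the black box's outputs, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=2)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box it approximates.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity to black box: {fidelity:.2%}")
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(8)]))
```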
Our team consisted of three data science and analytics professionals skilled in the development of AI models, three risk and compliance professionals with specific knowledge and experience in financial services, and a project lead.
Further, KPMG helped the client define the ideal balance of controls for decisions made by its AI/ML solutions. Controls that are too numerous or too strict, and often not specific to a particular industry or data usage, can create undue burdens that encourage firms to sidestep them, which in turn puts the company at risk. We also helped the client see and understand the most relevant AI/ML benchmarks set by others in the same industry.
One of the challenges with implementing a responsible AI program is that evaluating AI systems for error or bias often requires more technical skill and experience with AI than was required to develop the AI solutions in the first place. It goes beyond the skills required for many other GRC efforts, and includes the ability not only to understand what AI systems are doing, but also to reverse-engineer them when the algorithms are proprietary and therefore inaccessible for direct examination.
Stay ahead with KPMG's technology services, designed to drive innovation in areas like AI, analytics, cybersecurity, blockchain, and cloud. Our more than 15,000 tech experts collaborate with CIOs, CEOs, and CTOs to implement technology strategies that yield sustainable business value.
Our service, KPMG Trusted AI, helps organizations implement appropriate governance, policies and controls so they can achieve a balanced approach that is bold, efficient and responsible, enhancing the value of AI with confidence.
Learn more about KPMG Trusted AI and our other technology solutions here.
Our professionals immerse themselves in your organization, applying industry knowledge, powerful solutions and innovative technology to deliver sustainable results. Whether it’s helping you lead an ESG integration, risk mitigation or digital transformation, KPMG creates tailored data-driven solutions that help you deliver value, drive innovation and build stakeholder trust.