
AI in Control within Financial Services

A leading US financial services company called on KPMG to help establish trust in its AI-driven decisions.

Establishing confidence in AI-driven decisions

A leading US financial services company recognized the growing risks associated with its artificial intelligence (AI) and machine learning (ML) systems. AI algorithms are designed by humans and are therefore vulnerable to human limitations, including errors, omissions and biases. To help establish trust in its AI-driven decisions, it called on KPMG.

Challenge: Building trust

Our client recognized that decisions made by its AI and ML systems could unwittingly create ethical, legal or compliance issues. To establish trust in those decisions, it turned to KPMG.

Solution: Adapting technology

In addition to helping our client design and implement a responsible AI program, we developed a proof of concept (POC) for the processes and technologies it would need to validate its AI/ML models and the data on which they rely.

Results: A reason for change

Beyond the technology, we helped the client make the business case for centralizing its AI/ML governance, operating model and solutions, which had been scattered across multiple business units, by providing evidence of the risk and vulnerabilities that could be detected and prevented with a more complete responsible AI solution.

Responsible AI: injecting ethics into algorithms

Speed to Modern Tech Podcast Series, Episode 4

On this episode we explore how AI bias can perpetuate “redlining,” and how adopting better governance can help build more responsible AI.

Enabling technologies

  • Python for custom data and algorithm validation scripts
  • The client’s cloud-enabled AI workbench
  • Reporting and analytics

The challenge: to err is human

As organizations hand over more and more of their business decisions to AI, there’s a growing concern about the ethical, legal and compliance implications.

While many think of computers as accurate, neutral, unemotional and purely logical machines, the human-designed AI algorithms they run are susceptible to error and bias, especially if the data they rely on is incomplete or contains inherent biases.

Financial services companies are subject to a significant number of regulatory requirements and compliance mandates. Avoiding unintended biases related to age or race in decisions about extending credit or determining credit limits, for example, is key to meeting those mandates. Investment decisions, of course, can also have significant financial consequences. It's therefore essential to have confidence that any decision AI makes is free from error or bias.
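A fairness check of this kind can be made concrete. The sketch below is purely illustrative (the applicant data, groups and threshold are invented, not the client's): a demographic-parity test compares approval rates across groups and flags a gap larger than a policy limit.

```python
def approval_rate(decisions, group, value):
    """Fraction of applicants in the given group that were approved (1 = approved)."""
    rows = [d for d, g in zip(decisions, group) if g == value]
    return sum(rows) / len(rows)

def demographic_parity_gap(decisions, group):
    """Largest difference in approval rates across all groups."""
    rates = [approval_rate(decisions, group, v) for v in set(group)]
    return max(rates) - min(rates)

# Toy credit decisions for two age bands (hypothetical data).
decisions = [1, 1, 0, 1, 0, 1, 0, 0]
age_band  = ["<40", "<40", "<40", "<40", "40+", "40+", "40+", "40+"]

gap = demographic_parity_gap(decisions, age_band)
print(f"approval-rate gap: {gap:.2f}")  # compare against a policy limit, e.g. 0.10
```

In practice the groups, metrics and acceptable gap would come from the firm's compliance mandates, not from a script author's judgment.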

Understanding, detecting and preventing such errors or biases, however, can be remarkably challenging. There’s often a lack of transparency into how AI is making its decisions — indeed, one of the reasons we turn to AI is precisely because it can detect patterns amid chaos that humans are incapable of seeing or even understanding.

Responsible AI is the practice of bringing transparency, accountability and trust to AI. Our goal was to help our client stand up an effective and efficient responsible AI program, powered by the technology required to validate its AI/ML models and data to support longer term scale.

The challenge in this case was that each of the institution’s five separate business units had already implemented their own AI/ML solutions. But without any formal or centralized responsible AI program, or technology specifically designed to detect AI/ML model or data errors or biases, the client was at risk.

Requirements

The institution’s IT organization wished to provide a single, centralized AI/ML platform that could be used by the firm’s five separate business units — for example, consumer products and institutional products.

Beyond defining the business rules and requirements, our task was to create a proof of concept (POC) and approach for a validation solution that would work with this centralized platform. We were also asked to evaluate the AI/ML models and data sets used by the separate business units, along with the controls surrounding them, to illustrate how centralizing the AI/ML platform could mitigate risks to which these independent systems may be vulnerable.

Our response

KPMG AI in Control is our responsible AI solution.

It combines our team of highly skilled AI technology and data science professionals, our advanced and proprietary AI tools and accelerators, our extensive experience with leading AI solutions, our strong industry alliances, and our long-standing experience in governance, risk and compliance (GRC) to form a remarkably broad solution. It’s designed to help organizations stand up a responsible AI program and build and evaluate sound AI and ML models to establish trust in the output of those models.

Beyond the technology used to validate AI/ML models and data, successful responsible AI programs require well-designed operating models and processes that reflect leading GRC practices. We helped define key performance indicators (KPIs) and other metrics that would provide benchmarks for AI/ML model testing and validation.
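KPIs of this kind are most useful when they are machine-checkable. As a hedged sketch (the metric names and threshold values below are invented placeholders, not the client's actual benchmarks), validation can be reduced to comparing a model's measured metrics against agreed limits:

```python
# Hypothetical KPI thresholds for model validation (illustrative values only).
KPI_THRESHOLDS = {
    "auc_min": 0.70,           # minimum acceptable discriminative power
    "parity_gap_max": 0.10,    # maximum approval-rate gap across groups
    "missing_rate_max": 0.05,  # maximum fraction of missing model inputs
}

def validate_model(metrics, thresholds=KPI_THRESHOLDS):
    """Return a list of KPI violations; an empty list means the model passes."""
    failures = []
    if metrics["auc"] < thresholds["auc_min"]:
        failures.append("auc below minimum")
    if metrics["parity_gap"] > thresholds["parity_gap_max"]:
        failures.append("parity gap above maximum")
    if metrics["missing_rate"] > thresholds["missing_rate_max"]:
        failures.append("missing-data rate above maximum")
    return failures

print(validate_model({"auc": 0.82, "parity_gap": 0.04, "missing_rate": 0.01}))  # []
print(validate_model({"auc": 0.65, "parity_gap": 0.15, "missing_rate": 0.01}))
```

Encoding the benchmarks this way lets the same checks run identically across business units, which is the point of centralizing governance.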

We also helped stand up an independent AI/ML operations function within the organization. Just as a firm can't audit itself, and a software engineer can't certify that their own code is free of bugs or vulnerabilities, an independent validator is needed for AI-related technologies and deployments.

The technology

We began by designing and programming the software that could be used to validate both the data being used and the output of the firm’s AI models based on that data.

This included custom Python scripts, open source libraries and other tools designed for this purpose.
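To illustrate the flavor of such a script (this is a minimal standard-library sketch, not the client's actual code; the field names are hypothetical), a data-validation pass might confirm that records reaching a model have all required fields and that numeric values are finite:

```python
import math

def validate_records(records, required_fields):
    """Basic pre-model data checks: required fields present, numeric values finite.
    Returns a list of (record index, problems) for every record with issues."""
    issues = []
    for i, rec in enumerate(records):
        problems = []
        for field in required_fields:
            if field not in rec or rec[field] is None:
                problems.append(f"missing {field}")
            elif isinstance(rec[field], float) and not math.isfinite(rec[field]):
                problems.append(f"non-finite {field}")
        if problems:
            issues.append((i, problems))
    return issues

records = [
    {"income": 52000.0, "age": 41},
    {"income": float("nan"), "age": 29},  # a NaN the model would otherwise ingest
    {"age": 35},                          # income missing entirely
]
print(validate_records(records, ["income", "age"]))
```

Real validation scripts would layer many more checks (ranges, distributions, drift) on top of this skeleton, typically with open-source libraries rather than hand-rolled loops.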

We worked with the client’s preferred AI/ML platform solution. This cloud-based AI workbench is designed to help data scientists and developers prepare, build, train and deploy high-quality machine learning models quickly by bringing together a broad set of purpose-built capabilities. However, by default, it’s not configured with any validation capabilities.

We helped develop the scripts for the platform that would enable the client to validate any AI or ML algorithms and the data on which they depend — essentially, “built-in” validation instead of using a separate add-on solution that typically evaluates only the output of those algorithms.

Our validation customizations were designed to help the firm quickly and easily detect, diagnose and correct problems that can arise with AI/ML models, whether developed and trained internally or externally. They also enabled our client to experiment with its models, providing “levers” it can pull in what-if experiments to determine how various parameters or assumptions affect the models that rely on them, and ultimately to quantify errors or biases in those models.
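The simplest such “lever” is a decision threshold. As an illustrative sketch (the scores are invented, and real sweeps would cover many parameters at once), sweeping the threshold shows its direct effect on the approval rate:

```python
def sweep_threshold(scores, thresholds):
    """What-if 'lever': approval rate produced by each candidate score threshold."""
    return {t: sum(s >= t for s in scores) / len(scores) for t in thresholds}

# Hypothetical model scores for a batch of applicants.
scores = [0.91, 0.72, 0.55, 0.43, 0.88, 0.35, 0.66, 0.78]

for t, rate in sweep_threshold(scores, [0.5, 0.6, 0.7, 0.8]).items():
    print(f"threshold {t:.1f} -> approval rate {rate:.2f}")
```

Running the same sweep per demographic group would connect this lever directly to the bias quantification described above.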

We also reverse-engineered several proprietary third-party models to help our client gain insights into those models that were otherwise a black box.

The team

One of our firm’s strengths is our ability to bring together highly skilled professionals from a wide range of disciplines.

Our team consisted of three data science and analytics professionals skilled in the development of AI models, three risk and compliance professionals with specific knowledge and experience in financial services, and a project lead.

Results

Armed with a powerful POC, our client was able to make the business case for centralizing its AI/ML solutions. It was able to demonstrate weaknesses and vulnerabilities in the systems that its discrete business units had been using independently.

Further, KPMG helped the client define the right balance of controls for decisions made by its AI/ML solutions. Controls that are too numerous or too strict, and not specific to a particular industry or data usage, create undue burdens that can encourage teams to sidestep them, which in turn puts the company at risk. We also helped the client see and understand the most relevant AI/ML benchmarks set by others in the same industry.

Why KPMG?

With our deep experience and extensive skills in AI, combined with our strong history in GRC, KPMG was the ideal choice.

One of the challenges with implementing a responsible AI program is that evaluating AI systems for error or bias often requires more technical skills and experience with AI than were required to develop the AI solutions in the first place. It goes beyond the skills required for many other governance, risk and compliance (GRC) efforts, and includes the ability not only to understand what AI systems are doing, but also to reengineer them when the algorithms are proprietary and therefore inaccessible for direct examination.

Speed to Modern Technology

Over the last dozen-plus years, we’ve built a leading technology organization designed specifically to help information technology leaders succeed at the pace business now demands.

Unlike business-only consultancies, our more than 15,000 technology professionals have the resources, engineering experience, battle-tested tools and close alliances with leading technology providers to deliver on your vision — quickly, efficiently and reliably. And unlike technology-only firms, we have the business credentials and sector experience to help you deliver measurable business results, not just blinking lights.
