Solved: CISOs address 4 AI security threats to businesses

The KPMG Trusted AI approach helps CISOs adopt AI solutions securely and responsibly

Nearly every organization in every industry is exploring artificial intelligence (AI) opportunities, especially in the red-hot category of generative AI (GenAI). In a recent KPMG survey, 97 percent of business executives said they would be investing in GenAI in the next 12 months.1

However, with these opportunities come a growing number of serious risks. CISOs need to get up to speed and understand this rapidly evolving technology so they can better assess, detect, and mitigate AI-related risks and adopt AI solutions in a responsible manner.

1 Source: “Generative AI Consumer TRUST Survey,” KPMG, January 2024

Let’s look at four major security risks and how a KPMG Trusted AI approach can help CISOs mitigate risks and improve security.

1. Intellectual property theft
2. Employee misuse
3. Inaccurate results and reputational damage
4. Deepfakes

Challenge 1

Intellectual property (IP) theft

The result
AI solutions are designed from the start to protect intellectual property and comply with applicable privacy and data protection laws and regulations. Meanwhile, policies and controls embedded in the organization’s processes help ensure that AI users do not accidentally expose company IP.

Before:

GenAI technology uses neural networks that are trained on large existing data sets to create new data or objects like text, images, audio or video.2 Various users input data to the network, which is then used to answer other users’ prompts. This process potentially exposes private, sensitive or proprietary information to the public, including bad actors.

In addition, GenAI content created according to an organization’s prompts could potentially contain another company’s IP. That could cause ambiguities over the authorship and ownership of the generated content, raising possible allegations of plagiarism or the risk of lawsuits involving charges of copyright infringement.

2 Source: “Unlock the Potential of Generative AI: A Guide for Tech Leaders,” Forbes.com, January 26, 2023

After:

CISOs work with KPMG—using our Trusted AI framework—to carefully consider how data is used for AI training and public consumption. Data can be de-identified (where explicit identifiers are hidden or removed) and/or anonymized (where the data cannot be linked in any way to identify individuals).
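
To make the de-identification step concrete, here is a minimal Python sketch (an illustration only, not part of the KPMG Trusted AI framework): it replaces explicit identifiers with salted hashes and scrubs email addresses out of free text before a record enters an AI data set. The field names and patterns are assumptions for the example, and note that salted hashing is de-identification (pseudonymization) rather than true anonymization, since anyone holding the salt could re-link the values.

    import hashlib
    import re

    # Hypothetical field list -- what counts as an explicit identifier is an
    # assumption for illustration, not a definition from the KPMG framework.
    EXPLICIT_IDENTIFIERS = {"name", "email", "phone", "ssn"}
    EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

    def de_identify(record: dict, salt: str) -> dict:
        """Remove explicit identifiers and scrub free text before a record
        is added to an AI training or prompt-augmentation data set."""
        cleaned = {}
        for key, value in record.items():
            if key in EXPLICIT_IDENTIFIERS:
                # Replace the identifier with a salted hash so records can still
                # be joined internally without exposing the raw value
                # (pseudonymization, not full anonymization).
                digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
                cleaned[key] = digest[:16]
            elif isinstance(value, str):
                # Strip obvious identifiers embedded in free text.
                cleaned[key] = EMAIL_PATTERN.sub("[REDACTED_EMAIL]", value)
            else:
                cleaned[key] = value
        return cleaned

    record = {"name": "Jane Doe", "notes": "Contact jane@example.com about the contract."}
    print(de_identify(record, salt="rotate-me"))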

Challenge 2

Employee misuse

The result
AI solutions are developed to help ensure that employees use AI in accordance with its intended purpose and scope, reducing the risk of a wide range of ethical, legal, and financial issues that could stem from deliberate misuse of the company’s AI systems.

Before:

Many AI applications are easy to set up, learn and operate, but that also opens the door to misuse by employees. According to a 2024 business survey, 84 percent of workers who use generative AI at work said they have publicly exposed their company’s data in the last three months.3

Some data exposure is accidental, but other cases involve deliberate intent to defraud or deceive. For example, employees might use GenAI to create reports, presentations, or analyses and pass off the results as their own; contract workers might do the same and bill the company for hours they never worked. More seriously, employees could use GenAI to automate legal confirmations or reviews in ways that skirt ethics and regulatory compliance requirements.

3 Source: “3 ways companies can mitigate the risk of AI in the workplace,” World Economic Forum, January 16, 2024

After:

Trusted AI technology and services from KPMG are used to develop the infrastructure and tools to monitor AI use by employees. KPMG can also work with CISOs and their colleagues to develop safe usage guidelines for AI applications and information security policies.
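
As one hedged illustration of what monitoring employee AI use can look like in practice, the Python sketch below logs every outbound GenAI prompt and blocks any that appear to contain company secrets before they leave the organization. The screen_prompt helper, the detection patterns, and the log format are hypothetical examples for this article, not a KPMG tool.

    import logging
    import re
    from datetime import datetime, timezone

    # Illustrative patterns only -- real data-loss-prevention rules would be
    # far broader and tuned to the organization.
    BLOCKED_PATTERNS = {
        "api_key": re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*\S+"),
        "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("genai-usage")

    def screen_prompt(user: str, prompt: str) -> bool:
        """Log the prompt for later review and return True only if it is
        safe to forward to the approved GenAI provider."""
        hits = [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(prompt)]
        log.info("user=%s time=%s blocked=%s", user,
                 datetime.now(timezone.utc).isoformat(), hits)
        return not hits

    if screen_prompt("jdoe", "Summarize Q3 revenue. api_key=sk-12345"):
        pass  # forward the prompt to the approved GenAI provider
    else:
        print("Prompt blocked: remove sensitive data before resubmitting.")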

Challenge 3

Inaccurate results and reputational damage

The result
AI solutions are based on accurate, timely, and appropriate content, and there is transparency in how AI results are generated and used. Whether the goal is to increase productivity or to open new avenues of value creation, businesses can use AI applications more confidently when they trust the quality of the outputs.

Before:

Even the legitimate use of AI carries risks. If GenAI content contains inaccuracies, it can cause any number of failures that could impact business outcomes or create liability issues for the business. Lack of transparency in the creation and use of GenAI content can also create reputational issues for organizations.

Other risks around generative AI include perpetuating or amplifying societal biases that may be present in the data used to train the tool. The technology can potentially locate sensitive information, such as personal data, that could be used for identity theft or the invasion of privacy. Disgruntled employees or angry customers could create fictitious material that could malign the company’s reputation or that of one of its employees. 

After:

With the help of KPMG Trusted AI solutions, content used in AI solutions is acquired in compliance with applicable laws and regulations and assessed for accuracy, completeness, appropriateness, and quality. In addition, AI solutions are designed to reduce or eliminate bias against individuals, communities, and groups.

Challenge 4

Deepfakes

The result
Robust, leading-edge cybersecurity tools, processes, and culture help mitigate risks from deepfakes and other malicious content that could seriously damage a company’s security and reputation.

Before:

Bad actors from outside the organization can use GenAI to create so-called deepfake images or videos with uncanny realism and without any forensic traces left behind in edited digital media. This makes them extremely difficult for humans or even machines to detect.

A deepfake image could be created depicting a company employee in a scandalous situation. An individual could use GenAI to create fake images or videos and use them to file fraudulent insurance claims. In addition, cybercriminals can use AI technology to craft more realistic and sophisticated phishing scams or forge credentials to break into systems.

After:

KPMG professionals and CISOs fight deepfakes by implementing a zero-trust model that involves multi-factor authentication, behavioral biometrics, single sign-on, password management, and privileged access management. 
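
For illustration only, the sketch below shows the kind of “never trust, always verify” decision a zero-trust control plane makes on every request: the session’s second factor, the device’s posture, and any privileged-access approval are all checked before access is granted, regardless of where the request comes from. The AccessRequest fields and the policy are assumptions for the example, not KPMG’s implementation.

    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        user: str
        mfa_verified: bool          # second factor completed for this session
        device_compliant: bool      # endpoint passes posture checks
        resource_sensitivity: str   # "standard" or "privileged"
        has_pam_approval: bool      # privileged access management ticket approved

    def allow_access(req: AccessRequest) -> bool:
        """Zero-trust style decision: verify every request explicitly.
        These checks are illustrative, not exhaustive."""
        if not (req.mfa_verified and req.device_compliant):
            return False
        if req.resource_sensitivity == "privileged" and not req.has_pam_approval:
            return False
        return True

    # A privileged request without PAM approval is denied even with valid MFA.
    print(allow_access(AccessRequest("jdoe", True, True, "privileged", False)))  # False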

How we help

Integrated capabilities for CISOs

KPMG is a leader in helping manage risk associated with GenAI solutions. We rank No. 1 for quality of AI advice and implementation, according to the analyst group Source.

The KPMG Trusted AI approach underpins our broad suite of services and solutions for helping CISOs and other leaders manage AI risks and seize the opportunities of AI with confidence.

KPMG AI Security Services

A leading AI security and trust service provider which develops and delivers effective security solutions for AI systems and models across multiple industries

Deep experience

Deep experience in regulations, risk, security, privacy, and other critical areas that can prove beneficial in the fast-emerging space of trusted AI

Strategic alliances

A powerful network of strategic alliances and investments that enhance our capabilities and help our clients seize more value from strategy and technology investments

Cyber center of excellence (COE)

A dedicated center focused on securing AI and supporting the development of our AI security framework

KPMG ranks #1 for quality AI advice and implementation in the US

According to senior buyers of consulting services who participated in the Source study, Perceptions of Consulting in the US in 2024, KPMG ranked No. 1 for quality in AI advice and implementation services. 

Take a deeper dive into our cybersecurity insights

Access the latest KPMG insights to learn valuable facts, trends and guidance for CISOs about navigating the complexities of AI risk and innovation.

How KPMG AI Security and AI Trust Services can help

Our AI security professionals tailor the approach to meet the requirements, platforms, and capabilities of different organizations to deliver an effective and accepted security strategy. Consideration of current and upcoming regulations and frameworks underpins all of our solutions.

About KPMG AI Security Services

Service: AI security framework design

KPMG AI Security Services is a core Trusted AI capability that helps organizations secure their most critical AI systems with a technology-enabled, risk-based approach. Powered by a proprietary solution created in the KPMG Studio under the auspices of our AI security spinoff Cranium, we help organizations develop and deliver effective security for AI systems and models. 

Our AI security framework design provides security teams with a playbook to:

  • Proactively assess AI systems in development and production environments
  • Secure AI systems against threats such as backdoor attacks and model inversion (a minimal hardening sketch follows this list)
  • Respond effectively in the event of an attack
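
As a minimal, hypothetical sketch of the second point above, hardening a model’s serving layer against inversion-style attacks often combines per-client query limits with coarsened outputs, because full probability vectors and unlimited querying give an attacker more signal for reconstructing training data. The function, thresholds, and client identifiers below are illustrative assumptions, not the KPMG playbook.

    import time
    from collections import defaultdict

    # Aggressive querying is one signal of a model-inversion or
    # membership-inference attempt; the limit here is arbitrary.
    QUERY_LIMIT_PER_MINUTE = 60
    _query_log = defaultdict(list)

    def harden_prediction(client_id: str, probabilities: list) -> dict:
        """Wrap a model's raw output before returning it to a caller:
        rate-limit the client and return only the top label with a rounded
        confidence instead of the full probability vector."""
        now = time.time()
        recent = [t for t in _query_log[client_id] if now - t < 60]
        _query_log[client_id] = recent + [now]
        if len(recent) >= QUERY_LIMIT_PER_MINUTE:
            raise RuntimeError("query rate limit exceeded")

        top_class = max(range(len(probabilities)), key=probabilities.__getitem__)
        return {"label": top_class, "confidence": round(probabilities[top_class], 1)}

    print(harden_prediction("client-a", [0.07, 0.81, 0.12]))  # {'label': 1, 'confidence': 0.8}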

About KPMG AI Trust Services

Service: AI Trust Services
Unlock the vast potential of artificial intelligence with a trusted approach.

Trusted AI is our strategic framework and suite of services and solutions to help organizations embed trust in every step of the AI lifecycle. We combine deep industry experience and modern technical skills to help businesses harness the power of AI in a trusted manner—from strategy to design through to implementation and ongoing operations. 
