The KPMG Trusted AI approach helps CISOs adopt AI solutions securely and responsibly
Nearly every organization in every industry is exploring artificial intelligence (AI) opportunities, especially in the red-hot category of generative AI (GenAI). In a recent KPMG survey, 97 percent of business executives said they would be investing in GenAI in the next 12 months.1
However, these opportunities come with a growing number of serious risks. CISOs need to get up to speed on this rapidly evolving technology so they can better assess, detect, and mitigate AI-related risks and adopt AI solutions responsibly.
1 Source: “Generative AI Consumer TRUST Survey,” KPMG, January 2024
Before:
GenAI technology uses neural networks that are trained on large existing data sets to create new data or objects such as text, images, audio, or video.2 Various users input data to the network, which is then used to answer other users’ prompts. This process can expose private, sensitive, or proprietary information to the public, including bad actors.
In addition, GenAI content created from an organization’s prompts could contain another company’s IP. That creates ambiguity over the authorship and ownership of the generated content, raising the risk of plagiarism allegations or copyright-infringement lawsuits.
2 Source: “Unlock the Potential of Generative AI: A Guide for Tech Leaders,” Forbes.com, January 26, 2023
After:
CISOs work with KPMG—using our Trusted AI framework—to carefully consider how data is used for AI training and public consumption. Data can be de-identified (where explicit identifiers are hidden or removed) and/or anonymized (where the data cannot be linked in any way to identify individuals).
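To make the de-identification step concrete, here is a minimal sketch of masking explicit identifiers in a prompt before it leaves the organization. The regex patterns and the scrub_prompt helper are hypothetical illustrations, not KPMG tooling; a production system would rely on a vetted PII-detection service.

```python
import re

# Hypothetical patterns for explicit identifiers; illustrative only.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub_prompt(text: str) -> str:
    """De-identify a prompt by masking explicit identifiers before it is sent out."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REMOVED]", text)
    return text

raw = "Contact Jane at jane.doe@example.com or 555-867-5309 about the claim."
print(scrub_prompt(raw))
# Contact Jane at [EMAIL REMOVED] or [PHONE REMOVED] about the claim.
```

Note that masking identifiers de-identifies data but does not by itself anonymize it; quasi-identifiers such as job titles or locations can still allow individuals to be re-identified.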
Before:
Many AI applications are easy to set up, learn and operate, but that also opens the door to misuse by employees. According to a 2024 business survey, 84 percent of workers who use generative AI at work said they have publicly exposed their company’s data in the last three months.3
Some data exposure is accidental, but other cases involve deliberate intent to defraud or deceive. For example, employees might use GenAI to create reports, presentations, or analyses and pass off the results as their own. Contract workers might pass off GenAI output as their own work and bill the company for hours they never performed. More seriously, employees could use GenAI to automate legal confirmations or reviews in ways that sidestep ethics and regulatory compliance requirements.
3 Source: “3 ways companies can mitigate the risk of AI in the workplace,” World Economic Forum, January 16, 2024
After:
Trusted AI technology and services from KPMG are used to develop the infrastructure and tools to monitor AI use by employees. KPMG can also work with CISOs and their colleagues to develop safe usage guidelines for AI applications and information security policies.
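As one illustration of what such monitoring infrastructure might look like, the sketch below scans egress-proxy events for traffic to public GenAI endpoints that carries sensitivity markings. The host watch list, marker strings, and ProxyEvent record are assumptions made for illustration, not a KPMG product.

```python
from dataclasses import dataclass

# Hypothetical watch list of public GenAI endpoints; in practice this would
# come from the organization's egress proxy configuration.
GENAI_HOSTS = {"api.openai.com", "generativelanguage.googleapis.com"}
SENSITIVE_MARKERS = ("confidential", "internal only", "do not distribute")

@dataclass
class ProxyEvent:
    user: str
    host: str
    payload: str

def flag_event(event: ProxyEvent) -> str | None:
    """Return a finding if the event is GenAI traffic carrying marked data."""
    if event.host not in GENAI_HOSTS:
        return None
    hits = [m for m in SENSITIVE_MARKERS if m in event.payload.lower()]
    if hits:
        return f"{event.user} sent marked data ({', '.join(hits)}) to {event.host}"
    return None

events = [
    ProxyEvent("asmith", "api.openai.com", "Summarize this CONFIDENTIAL memo"),
    ProxyEvent("bjones", "weather.example.com", "Forecast for Tuesday"),
]
for event in events:
    finding = flag_event(event)
    if finding:
        print("ALERT:", finding)
```

In practice, findings like these would feed an organization's existing security monitoring rather than stand alone, and would be paired with the safe usage guidelines described above.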
Before:
Even the legitimate use of AI carries risks. If GenAI content contains inaccuracies, it can cause any number of failures that could impact business outcomes or create liability issues for the business. Lack of transparency in the creation and use of GenAI content can also create reputational issues for organizations.
Other risks around generative AI include perpetuating or amplifying societal biases that may be present in the data used to train the tool. The technology can potentially locate sensitive information, such as personal data, that could be used for identity theft or the invasion of privacy. Disgruntled employees or angry customers could create fictitious material that could malign the company’s reputation or that of one of its employees.
After:
With the help of KPMG Trusted AI solutions, content used in AI solutions is acquired in compliance with applicable laws and regulations and assessed for accuracy, completeness, appropriateness, and quality. In addition, AI solutions are designed to reduce or eliminate bias against individuals, communities, and groups.
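Bias assessments can take many forms; one common, simple check is the demographic parity gap, the difference in positive-outcome rates between groups. The sketch below is a generic illustration of that metric with invented toy data, not the KPMG assessment methodology.

```python
from collections import defaultdict

def selection_rates(records):
    """Rate of positive model outcomes per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += int(outcome)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(records):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(records)
    return max(rates.values()) - min(rates.values())

# Toy predictions: (group label, model said "approve")
sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
print(selection_rates(sample))         # {'A': 0.67, 'B': 0.33} (approx.)
print(demographic_parity_gap(sample))  # ~0.33, a gap worth investigating
```

A gap alone does not prove unfair treatment, but it flags where a deeper review of training data and model behavior is warranted.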
Before:
Bad actors from outside the organization can use GenAI to create so-called deepfake images or videos with uncanny realism and without any forensic traces left behind in edited digital media. This makes them extremely difficult for humans or even machines to detect.
A deepfake image could be created depicting a company employee in a scandalous situation. An individual could use GenAI to create fake images or videos and use them to file fraudulent insurance claims. In addition, cybercriminals can use AI technology to create more realistic and sophisticated phishing scams or credentials to hack into systems.
After:
KPMG professionals and CISOs fight deepfakes by implementing a zero-trust model that involves multi-factor authentication, behavioral biometrics, single sign-on, password management, and privileged access management.
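To show how the zero-trust signals listed above might combine, here is a simplified policy check that re-evaluates every request rather than trusting a session once. The AccessRequest fields, thresholds, and decide function are a hypothetical sketch of the idea, not a KPMG or vendor API.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    mfa_verified: bool          # multi-factor authentication completed
    device_managed: bool        # request comes from a managed device
    behavior_score: float       # 0.0 (anomalous) to 1.0 (matches baseline)
    privileged: bool            # privileged-access action requested

def decide(request: AccessRequest) -> str:
    """Never trust, always verify: every request re-checks every signal."""
    if not request.mfa_verified:
        return "deny: MFA required"
    if not request.device_managed:
        return "deny: unmanaged device"
    if request.behavior_score < 0.5:
        return "challenge: behavioral anomaly, require step-up authentication"
    if request.privileged and request.behavior_score < 0.8:
        return "challenge: privileged action needs stronger assurance"
    return "allow"

print(decide(AccessRequest("asmith", True, True, 0.9, False)))  # allow
print(decide(AccessRequest("asmith", True, True, 0.6, True)))   # challenge
```

The point is the posture, not the specific thresholds: because deepfakes can defeat human judgment, access decisions lean on verifiable signals such as MFA, device state, and behavioral baselines.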
How we help
Integrated capabilities for CISOs
KPMG is a leader in helping manage the risks associated with GenAI solutions. The analyst group Source ranks KPMG No. 1 for quality of AI advice and implementation.
The KPMG Trusted AI approach underpins our broad suite of services and solutions for helping CISOs and other leaders manage AI risks and seize the opportunities of AI with confidence.
KPMG AI Security Services
A leading AI security and trust service provider that develops and delivers effective security solutions for AI systems and models across multiple industries
Deep experience
Deep experience in regulations, risk, security, privacy, and other critical areas that can prove beneficial in the fast-emerging space of trusted AI
Strategic alliances
A powerful network of strategic alliances and investments that enhance our capabilities and help our clients seize more value from strategy and technology investments
Cyber center of excellence (COE)
A dedicated center for securing AI that supports the development of our AI security framework
According to senior buyers of consulting services who participated in the Source study, Perceptions of Consulting in the US in 2024, KPMG ranked No. 1 for quality in AI advice and implementation services.
Access the latest KPMG insights to learn valuable facts, trends and guidance for CISOs about navigating the complexities of AI risk and innovation.
What your AI Threat Matrix Says about your Organization
Ready, Set, Threat
Fake content is becoming a real problem
Widespread availability of sophisticated computing technology and AI enables virtually anyone to create highly realistic fake content.
Is my AI secure?
Understanding the cyber risks of artificial intelligence to your business
Our AI security professionals tailor the approach to meet the requirements, platforms, and capabilities of different organizations to deliver an effective and accepted security strategy. Consideration of current and upcoming regulations and frameworks underpins all of our solutions.
KPMG AI Security Services is a core Trusted AI capability that helps organizations secure their most critical AI systems with a technology-enabled, risk-based approach. Powered by a proprietary solution created in the KPMG Studio under the auspices of our AI security spinoff Cranium, we help organizations develop and deliver effective security for AI systems and models.
Our AI security framework design provides security teams with a playbook for securing AI systems and models at every step of the AI lifecycle.
Trusted AI is our strategic framework and suite of services and solutions to help organizations embed trust in every step of the AI lifecycle. We combine deep industry experience and modern technical skills to help businesses harness the power of AI in a trusted manner—from strategy to design through to implementation and ongoing operations.
Connect with our experienced AI security professionals to learn how our integrated capabilities can help your business manage AI risks and seize AI opportunities.