
How the CISO can help the business kickstart generative AI projects

Businesses are keen to capture the benefits of generative AI (GenAI). Too often, however, security, privacy and other risk concerns are stalling progress. A robust AI risk review and readiness process can help.

GenAI’s ability to consume and organize vast amounts of information, mimic human understanding, and quickly generate content has created enormous expectations for a new spurt of technology-led productivity growth. In a KPMG survey of top executives, more than 70 percent of respondents said they expected to implement a GenAI solution by spring 2024, and more than 80 percent expected the technology to have “significant impact” on their businesses by the end of 2024.

But in many organizations, GenAI plans are stuck or progressing slowly as business units struggle to account for the full range of risks: security, privacy, reliability, ethics, regulatory compliance, intellectual property, and more.

This is where the Chief Information Security Officer (CISO) and the wider security team can step in and play a critical role. By working with risk, compliance, and legal teams to develop and activate a process to quickly assess and control risks around generative AI models and data sets, the CISO can help enable the business with new GenAI capabilities.

In this article, we look at just some of the ways the CISO can work as part of a cross-functional team to help their organization move forward in GenAI adoption by contributing to early AI risk review and readiness processes.

Cybersecurity and data privacy risks top the GenAI risk management agenda

GenAI presents an array of multi-layered and multi-disciplinary risk issues—in some cases introducing pure black-box unknowns. These challenges will require new depth, expertise, and leadership from the CISO, as well as the wider risk function. 

What risk concerns matter most to corporate leaders? According to our research, risks for which the CISO is largely responsible stand out. In a June 2023 KPMG survey of top leaders, 63 percent said privacy concerns with personal data are a high-priority area of risk management, ranking it #1 among all GenAI risks. This was followed closely by cybersecurity, at 62 percent of respondents.

Further, the preparedness we see on the ground does not match most business leaders’ confidence in dealing with GenAI risks. More than three-quarters of respondents in the June survey said they were highly confident in their organization’s ability to address and mitigate risks associated with GenAI. Yet, in our experience, many organizations remain stuck, searching for leadership, consensus, and a rational approach for resolving GenAI risk issues and establishing guardrails for ongoing protection. Cybersecurity is a particular void: a survey conducted by Microsoft found that 89 percent of surveyed companies do not have tools in place to secure their AI systems.1

1. Source: Adversarial Machine Learning, Redmond, USA (March 2021)

Classifying GenAI risks

Understanding how the CISO can help organizations achieve a higher level of risk readiness requires a general understanding of the technical risks associated with GenAI. They fall into four buckets:

01. Model context and governance

Context defines the purpose, scope, and ethical framework within which the GenAI model operates. It also defines the required training data and its sources and the model architecture, and it determines applicable risks such as security, fairness, and sustainability. Governance provides the structure to manage and oversee the model’s development and usage.

02. Input data

The quality, relevance, and fairness of input data directly impact the model’s effectiveness and ethical hygiene. Using high-quality and diverse data sets for training and fine-tuning GenAI models can also help reduce or prevent hallucinations. Organizations need to ensure that the data sources and pipelines used by GenAI models for training, validation, and inference are trustworthy and free from bias (one automated screening approach is sketched after this list).

03. Output data

GenAI can produce dazzling results: well-argued legal briefs, insightful reports, and in-depth analyses. However, before organizations trust these outputs, they need robust quality assurance practices, both manual and automated, to check and fix results (a minimal example of such an automated check appears after this list). The quality and completeness of input data, the model logic, and the infrastructure all affect the outputs of a GenAI model. Without the right measures in place, outputs could be biased, unethical, irrelevant, or incoherent; they could also expose sensitive data or carry legal and safety consequences.

04. Model logic and infrastructure

The model logic defines how the model operates, processes data, and generates responses or outputs—with direct influence on the model’s outcomes and decisions. For example, will the AI application be able to “guess” a response or only look for absolute data matches in generating responses? The infrastructure provides the computational resources and environment for execution—including hardware, software and design—and underpins the model’s functionality, performance, reliability, flexibility and scalability.
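
To make buckets 02 and 03 more concrete, the following Python sketch shows one way an automated sensitive-data screen might be applied both to candidate training records and to model outputs before release. It is an illustration only: the regex patterns, function names, and pass/fail logic are assumptions made for this example rather than a prescribed control, and a real implementation would add dedicated PII detection, bias testing, and human review.

# Minimal, illustrative sketch (not production-ready): a regex-based screen for
# obvious sensitive data, usable on candidate training records (bucket 02) and
# on generated outputs before release (bucket 03).
import re

# Hypothetical patterns for a few common identifiers; a real deployment would
# need far broader coverage (names, addresses, health data, secrets, etc.).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def find_sensitive(text):
    """Return any matches of the illustrative sensitive-data patterns."""
    hits = {}
    for label, pattern in SENSITIVE_PATTERNS.items():
        found = pattern.findall(text)
        if found:
            hits[label] = found
    return hits

def screen_training_records(records):
    """Keep only records with no obvious sensitive data; flag the rest."""
    clean, flagged = [], []
    for record in records:
        (flagged if find_sensitive(record) else clean).append(record)
    # Flagged records would go to a masking/minimization step or human review.
    return clean, flagged

def check_output(generated_text):
    """Automated QA gate: block a GenAI response that exposes sensitive data."""
    return not find_sensitive(generated_text)

if __name__ == "__main__":
    sample = "Contact the claimant at jane.doe@example.com about SSN 123-45-6789."
    print("Safe to release?", check_output(sample))  # prints: Safe to release? False

In practice, a check like this would sit alongside, not replace, the manual review described above, and the patterns and thresholds would be set together with the privacy and legal teams.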

Get started: Integrate risk management from the first stages of GenAI adoption

Business leaders are looking to generative AI to increase efficiency and enable new sources of growth, and they are impatient to get going so they don’t fall behind. The CISO and the security team—working with other functions—can help shorten the path to GenAI implementation by integrating risk management into the design of AI models.

One essential responsibility of the CISO will be reviewing key security and privacy risk topics before implementation proceeds. After all, numerous risk areas need to be addressed up front, including a wide range of cybersecurity and data privacy risks that ultimately fall under the purview of the CISO and the security team. As such, the CISO’s voice, knowledge, and perspective should be heard during AI risk review.

For example, consider the critical privacy risk issues at each stage of GenAI adoption and the potential controls that can help reduce or mitigate those risks—if, through the CISO’s leadership and contribution, a security and privacy mindset is introduced at the start.
GenAI adoption stage | Privacy risk | Control examples
Strategy and design | Privacy considerations not being identified prior to development | Incorporate privacy by design and develop privacy metrics
Data | Violating data holders’ privacy rights at collection | Adhere to privacy practices and mask/minimize data at collection
Modeling | Deviating from the organization’s privacy principles and commitments | Develop models that enable and adhere to privacy principles
Evaluation | Privacy practice implementation errors | Evaluate against privacy metrics
Deployment and optimization | Violating international privacy regulations | Monitor against regulatory changes and privacy metrics
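
As one illustration of the “mask/minimize data at collection” control in the Data row above, the short Python sketch below drops fields a downstream GenAI pipeline does not need and pseudonymizes a direct identifier before the record is stored. The field names, allow-list, and masking rule are assumptions made for this example; an actual control would be driven by the organization’s data classification, retention, and privacy policies.

# Illustrative sketch of masking and minimizing personal data at collection,
# before records reach a GenAI training or inference pipeline.
import hashlib

# Minimization: only fields the downstream use case actually needs are kept.
ALLOWED_FIELDS = {"customer_id", "inquiry_text", "product", "region"}

def pseudonymize(value, salt="rotate-me"):
    """Replace a direct identifier with a salted hash so records stay linkable
    without exposing the raw value. (Real systems would manage salts and keys
    properly, or use tokenization or format-preserving encryption.)"""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()[:16]

def collect(record):
    """Apply minimization and masking before the record is stored."""
    minimized = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    if "customer_id" in minimized:
        minimized["customer_id"] = pseudonymize(str(minimized["customer_id"]))
    return minimized

if __name__ == "__main__":
    raw = {
        "customer_id": "C-10442",
        "full_name": "Jane Doe",          # dropped: not needed downstream
        "email": "jane.doe@example.com",  # dropped: not needed downstream
        "inquiry_text": "Question about my invoice",
        "product": "Insurance",
        "region": "EU",
    }
    print(collect(raw))  # identifier masked, unneeded fields removed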

Keep going: Contribute security expertise to the GenAI taskforce

To expedite reviews and approvals on these and other diverse risk topics, we also recommend companies establish a multidisciplinary GenAI taskforce—and give the CISO a central role.

When a business unit or function proposes a generative AI implementation, each team on the taskforce should bring its unique information and perspective to bear to help solve the relevant challenges that arise at each stage. For example, finance should assess the proposal’s potential payoff and strategic priority; IT should determine how it can be implemented, including choosing vendors, partners, and models; and risk should develop repeatable governance and review processes.

For their part, the security team should contribute to the taskforce by empowering the business to assess the AI ecosystem, secure its critical models, and respond to adversarial attacks. Key actions may include analyzing existing AI security policies, procedures, configurations, and response plans to identify gaps and areas for improvement, as well as designing, developing, and delivering a security strategy, framework, and solutions for securing AI systems and models.

Explore more

It’s clear the CISO and security team will play a crucial role in GenAI governance, management, and monitoring, helping businesses adapt to the challenges of the EU AI Act and accelerate value from this exciting emerging technology. For further insights and advice for CISOs on GenAI deployment and operations, check out our latest thought leadership.
