Businesses are keen to capture the benefits of generative AI (GenAI). Too often, however, security, privacy and other risk concerns are stalling progress. A robust AI risk review and readiness process can help.
GenAI’s ability to consume and organize vast amounts of information, mimic human understanding, and generate content has quickly created enormous expectations for a new spurt of technology-led productivity growth. In a KPMG survey of top executives, more than 70 percent of respondents said they expected to implement a GenAI solution by spring 2024, and more than 80 percent expected the technology to have “significant impact” on their businesses by the end of 2024.
But in many organizations, GenAI plans are stuck or progressing slowly as business units struggle to account for all the risks: security, privacy, reliability, ethics, regulation, and intellectual property, among others.
This is where the Chief Information Security Officer (CISO) and the wider security team can step in and play a critical role. By working with risk, compliance, and legal teams to develop and activate a process to quickly assess and control risks around generative AI models and data sets, the CISO can help enable the business with new GenAI capabilities.
In this article, we look at just some of the ways the CISO can work as part of a cross-functional team to help their organization move forward in GenAI adoption by contributing to early AI risk review and readiness processes.
GenAI presents an array of multi-layered and multi-disciplinary risk issues—in some cases introducing pure black-box unknowns. These challenges will require new depth, expertise, and leadership from the CISO, as well as the wider risk function.
What risk concerns matter most to corporate leaders? According to our research, risks for which the CISO is largely responsible stand out. In a June 2023 KPMG survey of top leaders, 63 percent said privacy concerns with personal data are a high-priority area of risk management, ranking #1 among all GenAI risks. Cybersecurity followed closely, cited by 62 percent of respondents.
Further, the preparedness to deal with GenAI risks that we see on the ground does not match most business leaders’ confidence. More than three-quarters of respondents in the June survey said they were highly confident in their organization’s ability to address and mitigate risks associated with GenAI. Yet our experience is that many organizations remain stuck, searching for leadership, consensus, and a rational approach for resolving GenAI risk issues and establishing guardrails for ongoing protection. Cybersecurity is a particular void: a survey conducted by Microsoft found that 89 percent of surveyed companies do not have tools in place to secure their AI systems.1
Understanding how the CISO can help organizations achieve a higher level of risk readiness requires a general understanding of the technical risks associated with GenAI. They fall into four buckets: context and governance, data, outputs, and model logic and infrastructure.
Context defines the purpose, scope, and ethical framework within which the GenAI model operates. It also defines the required training data and its sources and the model architecture, and it determines applicable risks such as security, fairness, and sustainability. Governance provides the structure to manage and oversee the model’s development and usage.
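To make this concrete, the context and governance information described above can be captured in a machine-readable record at intake, similar in spirit to a model card. The sketch below is a minimal, hypothetical Python example; the field names, values, and risk categories are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModelContextCard:
    """Hypothetical intake record for a GenAI model's context and governance."""
    purpose: str                       # what the model is for, and for whom
    scope: str                         # where it may and may not be used
    training_data_sources: list[str]   # provenance of training/fine-tuning data
    architecture: str                  # e.g., which foundation model family
    applicable_risks: list[str] = field(default_factory=list)  # security, fairness, sustainability...
    governance_owner: str = ""         # accountable owner overseeing development and usage

# Illustrative entry for a hypothetical internal use case
card = ModelContextCard(
    purpose="Summarize internal support tickets for service teams",
    scope="Internal use only; no customer-facing output",
    training_data_sources=["ticket_archive_2021_2023", "public_product_docs"],
    architecture="Fine-tuned open-weights LLM (illustrative)",
    applicable_risks=["privacy", "security", "fairness"],
    governance_owner="Head of Customer Operations",
)
print(card)
```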
The quality, relevance, and fairness of input data directly impact the model’s effectiveness and ethical hygiene. Using high-quality and diverse data sets in training and fine-tuning GenAI models can also help reduce or prevent hallucinations. Organizations need to ensure that the data sources and pipelines used by GenAI models for training, validation, and inference are trustworthy and free from bias.
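As one narrow illustration of what automated data-hygiene checks might look like before fine-tuning, the sketch below assumes a tabular training set with a hypothetical sensitive-attribute column; the column names and the 10 percent representation threshold are illustrative assumptions, and real checks would go much further.

```python
import pandas as pd

def basic_data_checks(df: pd.DataFrame, sensitive_col: str, min_share: float = 0.10) -> list[str]:
    """Flag simple quality and representation issues in a training dataset."""
    findings = []
    # Completeness: columns with missing values
    for col in df.columns:
        if df[col].isna().any():
            findings.append(f"missing values in column '{col}'")
    # Duplicates can skew training and leak evaluation data
    if df.duplicated().any():
        findings.append(f"{int(df.duplicated().sum())} duplicate rows")
    # Representation: groups below an illustrative minimum share
    shares = df[sensitive_col].value_counts(normalize=True)
    for group, share in shares.items():
        if share < min_share:
            findings.append(f"group '{group}' underrepresented ({share:.1%})")
    return findings

# Illustrative usage with a toy frame
df = pd.DataFrame({"text": ["a", "b", "b", None], "region": ["EU", "EU", "EU", "US"]})
for finding in basic_data_checks(df, sensitive_col="region"):
    print("FLAG:", finding)
```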
GenAI can produce dazzling results: well-argued legal briefs, insightful reports, and in-depth analyses. However, before organizations trust these outputs, they need robust quality assurance practices, both manual and automated, in place to check and fix results. The quality and completeness of input data, the model logic, and the infrastructure all affect the outputs of a GenAI model. Without the right measures in place, outputs could be biased, unethical, irrelevant, or incoherent; they could also expose sensitive data or have legal and safety consequences.
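To make “automated quality assurance” concrete, here is a minimal sketch of one such check: scanning generated text for sensitive-data patterns before release. The regular expressions are simplified assumptions that would miss many real cases; production checks would combine several such tests with human review.

```python
import re

# Simplified, illustrative patterns; real PII detection needs far more coverage
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_like_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_output(text: str) -> dict[str, list[str]]:
    """Return sensitive-looking substrings found in a model's output, by category."""
    return {name: hits for name, pat in PII_PATTERNS.items() if (hits := pat.findall(text))}

draft = "Contact jane.doe@example.com; SSN 123-45-6789 on file."
issues = screen_output(draft)
if issues:
    print("Hold for review:", issues)  # route to a human reviewer instead of releasing
```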
The model logic defines how the model operates, processes data, and generates responses or outputs—with direct influence on the model’s outcomes and decisions. For example, will the AI application be able to “guess” a response or only look for absolute data matches in generating responses? The infrastructure provides the computational resources and environment for execution—including hardware, software and design—and underpins the model’s functionality, performance, reliability, flexibility and scalability.
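The “guess versus exact match” distinction above can be made explicit in application logic. The sketch below is an illustrative pattern, not any specific product’s API: answers come from a curated knowledge base by default, and the application only falls back to generative guessing when the deployment explicitly allows it.

```python
KNOWLEDGE_BASE = {  # illustrative curated facts
    "data retention period": "Customer records are retained for 7 years.",
}

def answer(question: str, allow_generative_guess: bool = False) -> str:
    """Prefer exact matches from vetted content; guessing is an explicit opt-in."""
    key = question.strip().lower().rstrip("?")
    if key in KNOWLEDGE_BASE:
        return KNOWLEDGE_BASE[key]   # grounded, auditable response
    if allow_generative_guess:
        return call_llm(question)    # hypothetical model call
    return "No vetted answer available; escalating to a human."

def call_llm(question: str) -> str:
    # Placeholder for a real model invocation; parameters such as temperature
    # would be set here to control how freely the model may extrapolate.
    return f"(model-generated draft answer to: {question})"

print(answer("Data retention period?"))           # exact-match path
print(answer("Can we store EU data in the US?"))  # refuses to guess by default
```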
Business leaders are looking to generative AI to increase efficiency and enable new sources of growth, and they are impatient to get going so they don’t fall behind. The CISO and the security team—working with other functions—can help shorten the path to GenAI implementation by integrating risk management into the design of AI models.
One essential responsibility of the CISO will be reviewing key security and privacy risk topics before implementation proceeds. A wide range of cybersecurity and data privacy risks ultimately falls under the purview of the CISO and security team, so the CISO’s voice, knowledge, and perspective should be heard during AI risk review.
To expedite reviews and approvals on these and other diverse risk topics, we also recommend companies establish a multidisciplinary GenAI taskforce—and give the CISO a central role.
When a business unit or function proposes a generative AI implementation, each team on the taskforce should bring its unique information and perspective to bear to help solve the relevant challenges that arise at each stage. For example, finance should assess the proposal’s potential payoff and strategic priority; IT should determine how it can be implemented, including choosing vendors, partners, and models; and risk should develop repeatable governance and review processes.
For their part, the security team should contribute to the taskforce by empowering the business to assess the AI ecosystem, secure critical models, and respond to adversarial attacks. Key actions may include analyzing existing AI security policies, procedures, configurations, and response plans to identify gaps and areas for improvement, as well as designing, developing, and delivering a security strategy, framework, and solutions for securing AI systems and models.
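One of the key actions above, gap analysis against an AI security baseline, can be partly automated. The sketch below compares an inventory of a system’s current controls with a required baseline; the control names and the baseline itself are illustrative assumptions, not a published standard.

```python
# Illustrative baseline of required AI security controls (assumed, not a standard)
REQUIRED_CONTROLS = {
    "model_access_logging",
    "prompt_injection_filtering",
    "training_data_provenance",
    "incident_response_plan_for_ai",
    "output_pii_screening",
}

def gap_analysis(system_name: str, implemented: set[str]) -> None:
    """Report which baseline controls a given AI system is missing."""
    gaps = sorted(REQUIRED_CONTROLS - implemented)
    if gaps:
        print(f"{system_name}: {len(gaps)} gap(s)")
        for gap in gaps:
            print(f"  - missing: {gap}")
    else:
        print(f"{system_name}: meets baseline")

# Illustrative usage for a hypothetical internal system
gap_analysis("support-ticket-summarizer", {
    "model_access_logging",
    "training_data_provenance",
})
```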
It’s clear the CISO and security team will play a crucial role in GenAI governance, management, and monitoring, helping businesses adapt to the challenges of the EU AI Act and accelerate value from this exciting emerging technology. For further insights and advice for CISOs on GenAI deployment and operations, check out our latest thought leadership:
- KPMG generative AI survey report: Cybersecurity. An exclusive KPMG survey examines four areas where this remarkable technology shows great promise.
- How the EU AI Act affects US-based companies. A guide for CISOs and other business leaders.
- Is my AI secure? Understanding the cyber risks of artificial intelligence to your business.
- What your AI Threat Matrix Says about your Organization. Ready, Set, Threat.