CISOs and their security teams help manage AI risk and unleash AI value.
In the rapidly evolving landscape of artificial intelligence (AI) and generative AI (GenAI), the role of the Chief Information Security Officer (CISO) is critical. As organizations race to integrate AI across the enterprise, CISOs, in collaboration with risk, compliance, and legal teams, can help ensure that innovation does not come at the expense of security, privacy, and data integrity.
Enabling AI across the enterprise presents risks at every stage, from strategy and design to data collection and model training to deployment and optimization. AI also introduces new attack vectors for cybercriminals, ranging from data poisoning to model evasion. CISOs must anticipate these threats and fortify their organizations' cybersecurity measures while maintaining compliance with both US and global regulations, including the recently adopted EU AI Act.
As organizations navigate this complex terrain, the role of the CISO is evolving beyond traditional security measures. It requires a strategic vision that integrates security from the ground up across the AI lifecycle, ensuring that every AI deployment is scrutinized for security implications before it goes live. KPMG understands the pivotal role of CISOs in securing the AI-driven future and offers insights, tools, and services to help organizations responsibly seize its opportunities.
Whether coordinating security with data science teams or stepping into the role of Chief AI Officer, CISOs play a critical role in ensuring that their organizations evaluate, adopt, implement, and monitor trusted, responsible AI. By working with risk, compliance, and legal teams to develop and activate a process for quickly assessing and controlling risks around generative AI models and data sets, CISOs can enable the business with new AI capabilities.
CISOs and their teams anticipate and prepare for a range of potential attacks: adversaries exploiting the vulnerabilities of AI systems, attackers leveraging AI as an enabler of malicious schemes, third-party supplier compromises, and AI-specific techniques such as model evasion, data poisoning, inference, and functional extraction, as well as traditional threats like ransomware and malware.
CISOs should be at the table as teams determine how AI models operate, process data, and generate outputs. Their perspective should also inform how the supporting infrastructure provides computational resources and an execution environment, and how it underpins the model's functionality, performance, reliability, flexibility, and scalability.
Through their leadership, CISOs encourage organizations to consider critical privacy risks at each stage of AI adoption, along with the controls that can help mitigate those risks. CISOs can champion privacy by design, the development of data privacy policies and metrics, and compliance with international privacy regulations.
According to senior buyers of consulting services who participated in the Source study, Perceptions of Consulting in the US in 2024, KPMG ranked No. 1 for quality in AI advice and implementation services.
How CISOs can help kickstart GenAI projects
Businesses are keen to capture the benefits of generative AI (GenAI). Too often, however, security, privacy, and other risk concerns are stalling progress. A robust AI risk review and readiness process can help.
Is my AI secure?
Understanding the cyber risks of artificial intelligence to your business
KPMG generative AI survey report: Cybersecurity
An exclusive KPMG survey examines four areas where this remarkable technology shows great promise.
How the EU AI Act affects US-based companies
A guide for CISOs and other business leaders
Fake content is becoming a real problem
Widespread availability of sophisticated computing technology and AI enables virtually anyone to create highly realistic fake content.
Ready, Set, Threat
What your AI threat matrix says about your organization
Our AI security professionals tailor their approach to the requirements, platforms, and capabilities of each organization to deliver an effective and accepted security strategy. Consideration of current and upcoming regulations and frameworks underpins all of our solutions.
KPMG AI Security Services is a core Trusted AI capability that helps organizations secure their most critical AI systems with a technology-enabled, risk-based approach. Powered by a proprietary solution created in the KPMG Studio under the auspices of our AI security spinoff Cranium, we help organizations develop and deliver effective security for AI systems and models.
Our AI security framework design provides security teams with a playbook for securing AI systems and models.
Trusted AI is our strategic framework and suite of services and solutions to help organizations embed trust in every step of the AI lifecycle. We combine deep industry experience and modern technical skills to help businesses harness the power of AI in a trusted manner—from strategy to design through to implementation and ongoing operations.