A practitioner’s guide to managing AI Security
Companies of all sizes and across industries are engaged in the Artificial Intelligence (AI) revolution. The race to integrate AI into internal operations and bring AI-based products and services to market is moving faster than almost anyone could have imagined. These technologies stand to help companies transform their businesses, achieve short- and long-term objectives at a historic pace, and drive deeper connections with customers, partners, and other stakeholders.
At the same time, the fervent excitement about AI has the potential to relegate critical security and assurance considerations to afterthoughts. Recognizing this disconnect between AI innovation and AI Security, Global Resilience Federation (GRF) convened an AI Security & Trust working group and asked KPMG to facilitate in-depth discussions between AI and security practitioners from more than 20 leading companies, think tanks, academic institutions, and industry organizations. KPMG was also asked to document the output of the working group sessions, which ultimately led to the creation of this guide.
The Practitioners’ Guide to Managing AI Security aims to provide insights and considerations that strengthen collaboration between data scientists and AI security teams across five tactical areas identified by the working group: Securing AI, Risk & Compliance, Policy & Governance, AI Bill of Materials, and Trust & Ethics.
Rapid advancements in Artificial Intelligence (AI) capabilities have driven an equally rapid paradigm shift in how organizations approach processes across business functions. From the automation of simple tasks to highly sophisticated models that provide diagnostic recommendations based on medical imaging, AI has proven to be an exceptional tool for gaining and maintaining competitive advantage in tumultuous markets. However, even as it becomes evident that AI outputs will likely be critical to the future success and health of companies across industries, threat actors are taking notice of the new attack surface the technology creates.
The Global Resilience Federation (GRF) Summit on Security & Third-Party Risk being held October 11-12, 2023, in Austin, Texas, will illuminate how AI Security and AI innovation can be pursued in tandem.
The summit will include a joint keynote on Responsible AI from KPMG and Cranium leadership, as well as panels offering unique insights from AI and cybersecurity leaders and practitioners on how organizations are managing AI Security across sectors. This critical and engaging conference is designed for CIOs, CISOs, and AI/ML experts, who will find value in discussing effective ways to manage security and trust as they adopt Artificial Intelligence and Machine Learning models in their organizations.
Balancing ROI and Risk