Governing AI responsibly
Discover how risk professionals can develop an effective AI governance model

The transformative power and economic potential of generative AI cannot be denied. In a recent KPMG survey of 200 business leaders in the U.S., generative AI was rated the top emerging technology,[1] and respondents expected their organizations to be impacted "very highly" within the next 12 to 18 months. Additionally, 80 percent believed that generative AI will disrupt their industry, and 93 percent were confident that the technology will provide value to their business.
Equally clear, however, are the risks that generative AI poses. Even the best-run organizations can develop or adopt generative AI solutions that unintentionally introduce major risks in the following areas:
- Bias or inaccuracy: perpetuating and even amplifying societal biases present in the data used to train algorithms
- Errors and misinformation: inadvertently creating fake, distorted, or misleading content
- Privacy concerns with personal data: generating sensitive information, such as personally identifiable data or protected health information
- Cybersecurity: allowing the unintended introduction of vulnerabilities into infrastructures and applications through generated code or configurations
- Legal, copyright, and intellectual property (IP) issues: creating ambiguities over the authorship and ownership of, and responsibility for, the data input and the content generated by AI
- Liability: acting on wrong information or taking detrimental actions (such as a wrong diagnosis or the deletion of IP) that leave the organization open to legal liability
- Transparency: failing to understand input data or how generative AI makes decisions, sometimes because of “black box” technology provided by third-party suppliers
Developing an effective AI governance model: What risk professionals should know
Managing risk related to generative AI begins with developing a solid AI governance model designed to identify, manage, and respond to generative AI risks.
Based on our experience developing generative AI solutions, both for our own internal use and for our clients, an effective governance model should include essential directives and considerations such as the following:
- Develop a comprehensive governance model, inclusive of security and privacy
- Consider a single consistent governance model
- Develop and publicize a company-wide AI charter
- Reimagine your AI intake process
- Re-evaluate your third-party risk exposure and contracts
- Align existing policies for AI
- Implement controls to manage risks across the entire AI lifecycle
- Engage a diverse and representative group of stakeholders
Read our new article to understand the potential benefits of integrating generative AI into business functions while maintaining stakeholder trust.
How KPMG can help
At KPMG, with every generative AI project we strive to combine our deep industry experience, modern technical skills, leading solutions, and robust partner ecosystem to help business leaders harness the power of generative AI in a trusted manner, from initial strategy and design to ongoing activities and operations. We actively help clients manage the risks associated with generative AI solutions, including performing rapid assessments of existing generative AI frameworks, conducting maturity and benchmarking analyses, and implementing a generative AI governance process from intake to production.
Explore more

Get on board or get left behind
Visionary Internal Audit practices are charging ahead with advanced generative AI solutions.

2023 KPMG Generative AI Survey
An exclusive KPMG survey shows how top leaders are approaching generative AI.

The C-suite’s dilemma: Who’s in charge of AI risk?
Who owns AI? It's become the existential problem to solve as adoption skyrockets against a backdrop of uncertainty. The solution starts here.

