Who owns AI? As adoption skyrockets against a backdrop of uncertainty, that question has become the existential problem to solve. The solution starts here.
Every successful company understands the importance of risk management—from cybersecurity and data privacy to regulatory compliance. But one emerging threat is not getting the attention it deserves: the risk associated with artificial intelligence (AI).
Call it the paradox of progress. AI is booming in part because, as a largely open-source technology, it is owned by no one. But leveraging it to drive insights, automation, and innovation within an organization, while limiting risk at the same time, requires clear ownership and accountability.
This is a massive missing piece and an emerging threat, as we detail in our 2023 KPMG U.S. AI Risk Survey Report. At most organizations, there isn’t yet a role dedicated to AI risk management. But these risks, and the new regulatory requirements meant to help mitigate them, demand a thoughtful new framework, starting today, to put a tight ring around what would otherwise become an AI circus.
On top of that, the vast majority of the leaders we surveyed expect mandatory AI audits within a few years. And while there’s still much excitement and enthusiasm from company leaders about AI and its vast potential on many fronts, there’s also clear concern about the many unknowns ahead.
To better understand how businesses are approaching AI risk, KPMG asked 140 executives from various industries for their views about the threats associated with their AI initiatives. The big three, they agreed, are data integrity, statistical validity, and model accuracy.
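To make the third of those risks concrete, here is a minimal sketch of the kind of automated accuracy gate a risk team might run before a model ships. The synthetic data, the classifier, and the 0.90 floor are illustrative assumptions, not details from the survey:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; a real review would use the production holdout set.
X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_hold, y_train, y_hold = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
holdout_accuracy = accuracy_score(y_hold, model.predict(X_hold))

ACCURACY_FLOOR = 0.90  # hypothetical threshold set by the model's risk owner
if holdout_accuracy < ACCURACY_FLOOR:
    raise RuntimeError(
        f"Holdout accuracy {holdout_accuracy:.3f} is below the agreed floor "
        f"of {ACCURACY_FLOOR}; escalate before deployment."
    )
```

The check itself is trivial; the governance question is who sets the floor and who answers when it is breached.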
Given those stakes, it’s surprising that relatively few C-suite executives have a seat at the risk mitigation table.
Our survey also found that while many C-suiters are actively involved in providing direction on goals and analytics, they are mostly delegating the equally important rubber-meets-the-road parts: implementation, refinement, and risk review. This suggests that organizations recognize AI-related risks, but may not be bringing enough executive firepower and gravitas to the table to fully address them.
There’s also a potentially concerning gap in understanding of how AI models are defined. For example, 82 percent of survey respondents said their organization has a clear definition of AI and the related predictive models, and overall concern about AI transparency ranked a distant fourth. But with most companies using at least some third-party “black box” data and analytics solutions, which by definition lack that transparency, where is that confidence coming from?
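There are ways to claw back some of that visibility. As an illustration only, the sketch below probes an opaque model from the outside using permutation importance, a model-agnostic technique; the `opaque_model` name and the synthetic data are stand-ins for a real vendor model, which would be queried through whatever prediction interface it exposes:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# `opaque_model` stands in for any third-party model exposing predict();
# here we train one locally just so the example runs end to end.
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
opaque_model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Shuffle one feature at a time and measure how much the score drops:
# a large drop means the black box leans heavily on that feature.
result = permutation_importance(opaque_model, X, y, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: importance {result.importances_mean[i]:+.4f}")
```

Probing of this kind doesn’t open the black box, but it gives a risk team evidence about what drives the model’s behavior, which is more than a vendor assurance.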
The lack of AI ownership is exacerbated by emerging technology approaches such as data lakes, which conveniently centralize data for AI access and insight mining but also risk disconnecting that data from its source, with a resulting loss of ownership and domain-specific knowledge. Respondents named data integrity as their top concern, and it raises a hard question: how would an organization detect intentional errors introduced by malicious actors at the data’s source?
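One basic control for that scenario, sketched below under assumptions, is to fingerprint data at its source and re-verify it at the lake. The directory layout, CSV file pattern, and function names here are hypothetical; the point is simply that any divergence between the source manifest and what the lake holds is a tampering signal worth investigating:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large data-lake files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_manifest(source_dir: Path) -> dict[str, str]:
    """Run at the data's source: fingerprint every file before hand-off."""
    return {p.name: sha256_of(p) for p in sorted(source_dir.glob("*.csv"))}

def verify_before_ingest(lake_dir: Path, manifest: dict[str, str]) -> list[str]:
    """Run at the lake: return files whose contents no longer match the manifest."""
    return [name for name, expected in manifest.items()
            if sha256_of(lake_dir / name) != expected]
```

Checksums catch alteration in transit, not errors planted before the manifest is written, which is why domain owners need to stay accountable for the data they publish.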
Accelerating government oversight is making leaders sweat as well: 73 percent of respondents reported some level of regulatory oversight over their AI models, and 84 percent believe independent AI model audits will become a requirement within the next one to four years. A patchwork of government agencies is already circling AI model audits in the United States, and the EU is proposing regulations to govern AI model usage, with potential fines for noncompliance.
However, most organizations lack the expertise to conduct these audits internally, with only 19 percent saying they have the necessary skills to do that today. In other words, AI adoption and maturity are outpacing organizations’ ability to assess and manage associated risks effectively.
Responsible AI: A new way to manage risk
How can you address these threats? One answer is responsible AI: an approach to designing, building, and deploying AI systems in a safe, trustworthy, and ethical manner. To establish a responsible AI platform, organizations can start with eight guiding principles.