At a time when generative artificial intelligence (AI) seems to be moving at the speed of light, the importance of managing the related risks is shooting up the C-suite’s priority list just as quickly.
Risk and compliance experts are being challenged to keep up, which raises the question: Keep up with what? Formal laws around AI and generative AI (GenAI) technologies, or even informal guidelines, have not yet been established, so company leaders must proactively set appropriate protocols of their own to ensure safety, fairness, and ethical use.
They must also understand where regulatory trends are headed as legislative scrutiny intensifies and AI systems become more user-friendly, accessible, and ubiquitous. In many ways, companies need to be as adaptable as GenAI itself, continuously evaluating inherent biases, transparency, governance, and data privacy, and then updating their approach accordingly.
For context, our recent 2023 KPMG Generative AI Survey gathered insights from 200 senior US business leaders about the transformational impact this emerging technology is already having on their businesses, as well as how they’re navigating adoption amid a regulatory fog that’s been slow to lift.
For now, most are moving forward with a mix of intent and caution: