Leanne Allen, Partner

How do we get the future development and regulation of AI right?

2023 has been a year of booming interest and excitement around generative AI and large language models. The potential of the technology is huge. It’s not an overstatement to talk in terms of a Fifth Industrial Revolution – with implications for industry, governments and humanity at large.

While genAI could unlock massive productivity enhancements and drive powerful new capabilities, it also carries risks that need to be very carefully managed, including deepfakes, misinformation, bias, hallucinations, IP theft and plagiarism, to name but a few. In KPMG’s 2023 CEO Outlook Survey, global CEOs cited ethical challenges as their number one concern in relation to the implementation of generative AI. Regulatory efforts have been stepping up, including the publication of the UK’s AI White Paper, which espouses a ‘pro-innovation’ approach. So, how do we get the future development and regulation of AI right, for the economy, people, society and the planet?

With genAI now around a year old (in the public consciousness at least) and a new year fast approaching, it was more timely and relevant than ever to come together at the Digital Ethics Summit 2023 to discuss where AI may head next, and how we can ensure that it works in the best interests of all. In a thought-provoking and wide-ranging day of discussion, a number of key points stood out for me.

I was heartened that much of the discussion chimes with our thinking at KPMG. For example, our recent report Customer Experience Excellence explores how organisations leading in customer experience are striking the right balance between human and machine as they integrate the AI colleague concept.

Recognising the key strategic importance of AI to our clients (as well as to our own business and ways of working), we have already developed the concept of Trusted AI, built on three key principles: AI needs to be values-driven, human-centric and trustworthy. These principles extend into ten core pillars across the AI lifecycle, founded on attributes such as fairness, transparency, explainability, accountability and security.

For businesses specifically, we believe that AI needs a strong and clear governance approach across three foundational layers: the organisational layer (alignment of AI with organisational strategies and principles), the risk layer (ethics, regulatory compliance, risk management) and the technical layer (data and data-driven practices, technology systems, algorithms). Our aim is always to help businesses take a values-led approach and implement the AI colleague concept in a way that delivers value and builds trust. We’re doing the same internally at KPMG, embedding AI to help our colleagues work faster and smarter, as well as utilising it within client-facing solutions and services.

As we move forward on the AI journey, we can expect a lot to happen in 2024. My hope is that we will see significant regulatory frameworks emerge during the year that safeguard ethics and strike a proportionate balance, enhancing protections for the public without stifling business creativity, innovation and productivity-enhancing use cases.
