• David Rowlands, Partner
• Laurent Gobbi, Partner

Artificial intelligence (AI) has been described as the ‘internet moment of our time’. As the technology evolves, it increasingly has the power to transform our lives. 2023 was a year when the rapid advancement of new technology, combined with increased global uncertainty, drove many CEOs to start thinking seriously about embracing and embedding AI into future growth strategies. But change at such pace requires agility and innovative thinking. How people prepare for that change is crucial: scalability and speed depend on those around us embracing and adopting the technology.

In KPMG’s 2023 CEO Outlook survey, 70 percent of surveyed business leaders told us they were making generative AI a top investment priority. Meanwhile, more than half (52 percent) told us they were expecting to see a return on their investment in three to five years, highlighting the confidence that boardrooms have in AI’s seemingly limitless potential.

The challenge for CEOs and other leaders is how to develop a truly strategic approach to AI that embraces the possibilities without ignoring the technical and ethical risks. In KPMG’s CEO Outlook, more than half of leaders (57 percent) had concerns about the ethical challenges created by implementing AI, while in KPMG’s global tech report, a similar number (55 percent) of organizations told us progress toward automation had been delayed because of concerns about how AI systems would make decisions.

As political, business and civil society leaders meet this year in Davos for the latest World Economic Forum Annual Meeting, AI is one of the main topics on the agenda. KPMG and Microsoft are investing in the development of AI, with a clear focus on ensuring the right infrastructure and strategies are in place to help companies embrace AI in a responsible, human-focused way. Both organizations have been collaborating on approaches to responsible AI governance for some time and share a common view on the importance of developing responsible, trust-focused AI for the business community and wider society.

So, how do you make AI more ‘human-centric’ and what steps should you be taking to embed AI in your future growth strategy, to preserve trust and mitigate risk? Three specialist voices from KPMG and Microsoft offer their insights to help you on your AI journey.

David Rowlands, Global Head of AI, KPMG International


I was appointed Global Head of AI at KPMG late last year as part of KPMG’s multibillion-dollar global investment in the technology. It’s a top investment priority for KPMG and, as our CEO Outlook research highlighted, we’re not alone. An overwhelming majority of business leaders in major companies around the world have decided that now is the time to take AI seriously and embed it in future growth plans. And we’ve made a fast start – embracing the challenges of Trusted AI, enabling our people, and carefully managing our technology and data ecosystems.

The question of making AI more human-centric might appear quite vague at first. The biggest advocates for AI would argue we’re already there. It’s no exaggeration to suggest the technology has the potential to transform lives – from stripping away mundane day-to-day tasks in our jobs, to developing innovative new tools that assist modern medical science, to helping sustainability leaders tackle the climate crisis.

It is genuinely exciting but, as with anything new and relatively untested, there are potentially major pitfalls. Making AI more ‘human-centric’ is, in my view, about setting out a clear strategy that keeps the focus on trust, transparency and safety, and makes certain that AI benefits us all, rather than adding new layers of ethical and financial risk in an era when we’re already facing deep uncertainty.

KPMG has therefore launched its Trusted AI framework: a set of clear principles for responsible and ethical AI transformation. It’s like a written constitution, setting out clearly how we will use emerging AI technologies to enhance client engagements and the employee experience in a way that is truly responsible, trustworthy and safe. As an international network of member firms with hundreds of thousands of colleagues – most of whom are deep specialist knowledge workers – there can’t be anything more important.

For business leaders looking to embrace a human-centric AI future, I would urge them to look at governance first. For every person who’s excited about AI’s potential, there is another who is deeply concerned. Worried they may lose their job, or worried their company or personal data could be compromised. That’s why governance matters. It’s about setting out guidelines and rules before setting off on your journey – so that you can proceed safely and scale rapidly.

To start on that journey, be clear about what you want to achieve. Rather than simply adopting AI to keep up with your competitors, ask yourself: what will success look like in the future? Where do you want your business to be in five years, and how can AI be part of that? What will it feel like to be an employee in your future organization? Every citizen has a role to play in making AI work, so collaborate with your employees and upskill them.

The world is on the verge of something special with AI. Now is the moment for us all to look at how we make the technology work for humans. We can do that by being clear in our strategy, setting out guidelines that protect us and those around us, and taking everyone on the journey. 

Antony Cook, Corporate Vice President and Deputy General Counsel, Microsoft


At Microsoft, we’re focusing on continuing to integrate AI into all our products responsibly, creating a foundation our customers can build upon as they leverage our AI technology. Our AI development and use is grounded in six principles:

1. Fairness: AI systems should treat all people fairly.
2. Reliability and safety.
3. Privacy and security.
4. Inclusiveness: AI systems should empower everyone and engage people.
5. Transparency: AI systems should be understandable.
6. Accountability: people should be accountable for AI systems.

We have created tools and systems to ensure these principles are put into practice in every product or system we develop. And we’ve created resources for our customers, leveraging our learnings, to help them ensure their use and development of our AI products is responsible and aligned with these core principles.

When it comes to ensuring that AI is adopted and used responsibly, there are three key areas that I consider essential:

1. Leadership must be committed and involved: For responsible AI to be meaningful, it has to start at the top. At Microsoft, we have created a Responsible AI Council to oversee our efforts across the company. The Council is chaired by Microsoft’s Vice Chair and President, Brad Smith, and our Chief Technology Officer, Kevin Scott, who sets the company’s technology vision and oversees our Microsoft Research division. This joint leadership is core to our approach, sending a clear signal that Microsoft is committed not just to leadership in AI, but to leadership in responsible AI. The Responsible AI Council meets regularly, and brings together representatives of our core research, policy, and engineering teams dedicated to responsible AI. As customers consider how to structure their own responsible AI programs and governance, it’s imperative to ensure that senior leaders across multiple areas of the company be involved and directly engaged. 

2. Build inclusive governance models and actionable guidelines: Each company should create a responsible AI governance model that is inclusive, bringing together representatives from engineering, research and policy teams to develop and implement the governance model and the company’s guidelines around responsible AI. We have senior leaders tasked with spearheading responsible AI within each core business group at Microsoft, and we continually train and grow a large network of responsible AI “champions” to give us broader representation across the globe. Last year, Microsoft publicly released the second version of our Responsible AI Standard, which is our internal playbook for how to build AI systems responsibly. We encourage companies to review this document and to take from it any of our practices that they find beneficial.

3. Invest in and empower your people: Standards and plans are great but will not be meaningful without training your employees to support the rollout of responsible AI across the company. We have invested significantly in responsible AI over the years, with new engineering systems, research-led incubations, and, of course, people. We now have nearly 350 people working on responsible AI, with just over a third of those dedicated full time; the remainder have responsible AI responsibilities as a core part of their jobs. Our community members have positions in policy, engineering, research, sales, legal, and other core functions, touching all aspects of our business. 

Last summer, we launched our AI Customer Commitments, building on the resources we had already made available to our customers. We committed to continuing to share what we are learning about developing and deploying AI responsibly and to assist companies in learning how to do the same. Through our AI Assurance Program, we have offered to help customers ensure that the AI applications they deploy on our platforms meet the legal and regulatory requirements for responsible AI, including helping with regulator engagement and advocacy and with their risk framework implementation. And, finally, we have launched and will continue to grow our Responsible AI partner program, leveraging partners like KPMG to assist our mutual customers in deploying their own responsible AI systems. 

There is tremendous potential in AI, and creating and using it responsibly will be key for us and our customers across the globe.

Laurent Gobbi, Global Trusted AI & Tech Risk Leader, KPMG International


KPMG’s Trusted AI framework was purposely designed not only to help our people, but also to empower member firms’ clients. Trust is an integral part of who we are, and being able to harness the potential of AI comes down to how much trust we are willing to place in the use of this technology and our willingness to adapt.

The top three challenges KPMG professionals are hearing from clients are:

1. How can AI improve my business?
2. How can we ensure we are using AI responsibly?
3. What does effective governance look like?

AI is rapidly evolving. Over the past 12 months, leaders and organizations have faced new challenges, such as the surge in computational power and the pace of innovation in the software industry. However, AI technologies are also bringing about new and fascinating capabilities. For example, a single Large Language Model (LLM) can now encode much of the internet’s knowledge, allowing you to have an everyday conversation with the model as though you were speaking to another human – an incredibly knowledgeable human who can answer nearly any question with a high level of accuracy. These LLMs can also generate images, video and sound in the same way they generate text, and can be tuned to different levels of creativity.
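
To make that concrete, here is a minimal sketch of such a conversation in Python, using the OpenAI client purely as one illustrative option; the model name, prompt and temperature value are assumptions for this example, not recommendations:

```python
# A minimal sketch of an 'everyday conversation' with an LLM.
# Assumes the OpenAI Python client (v1+) and an OPENAI_API_KEY in the
# environment; the model name and prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",    # illustrative model name
    temperature=0.2,  # lower values favor accuracy; higher values favor creativity
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain what a large language model is."},
    ],
)
print(response.choices[0].message.content)
```

The temperature parameter is one common way such models are ‘tuned to different levels of creativity’: raising it makes outputs more varied, lowering it makes them more predictable.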

That knowledge isn’t static – there is an element of reasoning, and AI can articulate answers to new problems. Generative AI can manage unstructured data in much the same way that existing technology solutions deal with structured data. From a tech perspective, that is exciting because it opens up so many opportunities for the future, but it also creates many grey areas and challenges that are often hard to explain.
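
To illustrate the unstructured-data point, the sketch below asks an LLM to pull structured fields out of a free-text sentence; the model name, prompt and output schema are again illustrative assumptions, not a prescribed approach:

```python
# A minimal sketch of generative AI turning unstructured text into
# structured data. Assumes the OpenAI Python client (v1+); the model
# name and field schema are hypothetical, and a real system would
# validate the output rather than trust it blindly.
import json

from openai import OpenAI

client = OpenAI()

invoice_text = "Received 3 laptops from Acme Corp on 12 March for a total of 4,200 EUR."

response = client.chat.completions.create(
    model="gpt-4",  # illustrative
    temperature=0,  # deterministic output suits extraction tasks
    messages=[
        {"role": "system",
         "content": "Extract supplier, item, quantity and total from the text. "
                    "Reply with a JSON object only."},
        {"role": "user", "content": invoice_text},
    ],
)

record = json.loads(response.choices[0].message.content)  # may raise if the model adds prose
print(record)
```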

For business leaders, the challenge is balancing the enthusiasm for something new that has so much potential with the risks that we’re increasingly becoming aware of. We know about the ethical and big philosophical societal debates around AI, but there are other, more practical day-to-day risks. For example, when using tools such as LLMs, the more creativity you ask of a model, the less accuracy you receive – and it may occasionally produce unexpected results, sometimes referred to as ‘hallucinations’.

This black-box effect is not new. It is quite common for a complex technology platform to generate errors or discrepancies because of its multiple technical layers. The reasons are numerous but, in many cases, the cause lies in the way the system has been designed and maintained – either because the sponsor or users are not properly involved, or because the developer does not apply the right processes or testing steps.

Regulation is also a factor. On this point, I would advise leaders that they need to have a sense of personal responsibility. Regulators are struggling to keep pace with the speed of change and regulation tends to follow much later.

If you consider an AI system that is meant to provide a decision, ask: what is the system intended for? Who owns its design? Who is impacted by the decision? This is the point at which we need to bring the ‘human into the loop’. As with any technology system, you need strong involvement of people at the design, testing and production stages to ensure that your solution is stable.
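
Below is a minimal sketch of what bringing the ‘human into the loop’ can look like in code, with a hypothetical decision type and confidence threshold that a real system would calibrate during the testing stage:

```python
# A minimal human-in-the-loop sketch: decisions the model is not
# confident about are routed to a person instead of being applied
# automatically. The Decision type and threshold are hypothetical.
from dataclasses import dataclass


@dataclass
class Decision:
    subject: str
    outcome: str
    confidence: float  # 0.0 to 1.0, as reported by the AI system


CONFIDENCE_THRESHOLD = 0.85  # assumption: calibrated during the testing stage


def route(decision: Decision) -> str:
    """Auto-apply only high-confidence decisions; queue the rest for review."""
    if decision.confidence >= CONFIDENCE_THRESHOLD:
        return "auto-applied"
    return "queued-for-human-review"


print(route(Decision("application-42", "approve", 0.91)))  # auto-applied
print(route(Decision("application-43", "decline", 0.60)))  # queued-for-human-review
```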

My advice to CEOs and other business leaders is to embrace AI now, but take a strategic, democratic approach. Embed ethical thinking into how you drive AI integration forward. Constantly challenge yourself on the potential harmful use of AI, rather than simply focusing on the benefits.

Think of it in the context of ESG. What impact will your plans have on the environment around you? It may not seem obvious at first, but the rapid evolution and intensive use of technology is energy intensive, so consider how AI might undermine your efforts to play a supportive, transparent role in creating a more sustainable planet. What will the impact be on society? Have you truly factored in what AI means for your workforce? Ensure that everyone in your organization is part of the journey and benefits from, rather than suffers from, AI’s growth.

Finally, governance is arguably the most important thread that weaves everything together. To be clear, this isn’t about creating a golden rulebook and adding layers of red tape that could slow your journey down. Instead, think of it as a clear route map. Set boundaries and ensure that any new AI plans can be rigorously tested against ethical and people-related questions.

AI has great potential to become a golden asset for humans, but with poor planning and strategy it could slow business and expose us to new risks. Embrace AI and bring everyone on your journey to truly unlock the potential for technological transformation.