Artificial Intelligence (AI) is transforming the way work is done and how services are delivered. Organisations are leveraging the remarkable power of AI to improve data-based predictions, optimise products and services, augment innovation, enhance productivity and efficiency, and lower costs. However, AI adoption also poses risks and challenges, raising concerns about whether AI use today is truly trustworthy.

Realising the potential benefits of AI, and a return on investment, requires a clear and sustained focus on maintaining the public’s trust. To drive adoption, people need to be confident that AI is being developed and used in a responsible and trustworthy manner.

In collaboration with the University of Queensland, KPMG Australia led a world-first deep dive into trust in, and global attitudes towards, AI across 17 countries. Trust in artificial intelligence: A global study 2023 provides broad-ranging global insights into the drivers of trust, the perceived risks and benefits of AI use, community expectations of the governance of AI, and who is trusted to develop, use and govern AI.

This report, Trust in artificial intelligence: 2023 global study on the shifting public perceptions of AI, highlights key findings from the global study and provides individual country snapshots which should be instructive to those involved in leading, creating or governing AI systems. Importantly, four critical pathways are highlighted for policymakers, standards setters, governments, businesses and NGOs to consider as they navigate the trust challenges in AI development and deployment.

Explore the key global findings on the shifting public perceptions of AI

Most people are wary about trusting AI systems and have low or moderate acceptance of AI. Trust and acceptance depend on the AI application.

Three in five (61 percent) are wary about trusting AI systems.

67 percent report low to moderate acceptance of AI.

AI use in human resources is the least trusted and accepted, while AI use in healthcare is the most trusted and accepted.

People in emerging economies are more trusting, accepting and positive about AI than people in other countries.

People recognise AI’s many benefits, but only half believe the benefits outweigh the risks. People perceive AI risks in a similar way across countries, with cybersecurity rated as the top risk globally.

85 percent believe AI results in a range of benefits.

Yet only half of respondents believe the benefits of AI outweigh the risks.

Cybersecurity is the top-rated risk, with 84 percent of people concerned about it.

People are most confident in universities and defence organisations to develop, use and govern AI, and they are least confident in government and commercial organisations.

76 to 82 percent of respondents are confident in national universities, research institutions and defence organisations to develop, use and govern AI in the best interests of the public.

One-third of respondents lack confidence in government and commercial organisations to develop, use and govern AI.

There is strong global endorsement for principles that define trustworthy AI. Trust is contingent on assuring such principles are in place. People expect AI to be regulated with external, independent oversight — and they view current regulations and safeguards as inadequate.

97 percent strongly endorse the principles for trustworthy AI.

Three in four would be more willing to trust an AI system when assurance mechanisms are in place.

71 percent expect AI to be regulated.

Most people are comfortable using AI to augment work and inform managerial decision-making but want humans to retain control.

About half are willing to trust AI at work.

Most people are uncomfortable with or unsure about AI use for HR and people management.

Two in five believe AI will replace jobs in their area of work.

Younger people, the university-educated and managers are more trusting of AI at work.

People want to learn more about AI but currently have a low understanding. Those who understand AI better are more likely to trust it and perceive greater benefits.

Half of respondents feel they don’t understand AI or when and how it’s used.

45 percent don’t know AI is used in social media.

85 percent want to know more about AI.

About the study

This survey is the first deep-dive global examination of the public’s trust and attitudes towards AI use, and their expectations of its management and governance.

KPMG Australia worked with The University of Queensland to survey over 17,000 people from 17 countries leading in AI activity and readiness within each region: Australia, Brazil, Canada, China, Estonia, Finland, France, Germany, India, Israel, Japan, the Netherlands, Singapore, South Africa, South Korea, the United Kingdom, and the United States of America.

University of Queensland Researchers

Professor Nicole Gillespie, Dr Steve Lockey, Dr Caitlin Curtis and Dr Javad Pool.
The University of Queensland team led the design, conduct, analysis and reporting of this research.

KPMG Australia

James Mabbott (Partner), Rita Fentener van Vlissingen (Associate Director), Jessica Wyndham (Associate Director), and Richard Boele (Partner).


We are grateful for the insightful input, expertise and feedback on this research provided by Dr Ali Akbari, Dr Ian Opperman, Rossana Bianchi, Professor Shazia Sadiq, Mike Richmond, and Dr Morteza Namvar, and members of the Trust, Ethics and Governance Alliance at The University of Queensland, particularly Dr Natalie Smith, Associate Professor Martin Edwards, Dr Shannon Colville and Alex Macdade.
