Global study reveals trust in AI remains a critical challenge, reflecting tension between benefits and risks
The intelligent age has arrived; however, trust in AI remains a critical challenge
KPMG’s study underlines the tension between AI’s obvious benefits and perceived risks
Key findings:
- The intelligent age has arrived – 66% of people use AI regularly, and 83% believe the use of AI will result in a wide range of benefits
- Yet trust remains a critical challenge: only 46% of people globally are willing to trust AI systems
- There is a public mandate for national and international AI regulation, with 70% believing regulation is needed
- Many rely on AI output without evaluating accuracy (66%) and are making mistakes in their work due to AI (56%).
Bangkok, 23 July 2025 – KPMG International released a global study on trust in Artificial Intelligence (AI) that reveals more than half of people globally are unwilling to trust AI, reflecting an underlying tension between its obvious benefits and perceived risks.
Trust, attitudes and use of Artificial Intelligence: A global study 2025, led by Professor Nicole Gillespie, Chair of Trust at Melbourne Business School at the University of Melbourne, and Dr Steve Lockey, Research Fellow at Melbourne Business School, in collaboration with KPMG, is the most comprehensive global study to date of the public’s trust in, use of, and attitudes towards AI.
The study surveyed over 48,000 people across 47 countries between November 2024 and January 2025.
It found that although 66% of people are already intentionally using AI with some regularity, more than half of global respondents (58%) view it as untrustworthy.
With people in emerging economies reporting higher AI adoption, trust, and perceived benefits, it is clear that the technology’s impact is not evenly distributed, but its potential is universal. AI is without doubt the greatest technological innovation of a generation, and given the fast pace at which it continues to advance, it is crucial that it is grounded in trust if its potential is to be realized at the same rate.
Organizations have a clear role to play in ensuring that AI is both trustworthy and trusted. People want assurance over the AI systems they use, which means AI’s potential can only be fully realized if people trust the systems making decisions or assisting in them. This is why KPMG developed our Trusted AI approach: to make trust not only tangible but measurable for clients.
Christopher Saunders
Head of Consulting
KPMG in Thailand
AI at work
The age of working with AI is here, with three in five (58%) employees intentionally using AI – and a third (31%) using it weekly or daily.
This high use is delivering a range of benefits with most employees reporting increased efficiency, access to information and innovation. Almost half (48%) report AI has increased revenue-generating activity.
However, the use of AI at work is also creating complex risks for organizations. Almost half of employees admit to using AI in ways that contravene company policies, including uploading sensitive company information into free public AI tools like ChatGPT.
Many rely on AI output without evaluating accuracy (66%) and are making mistakes in their work due to AI (56%).
The findings show that AI is bringing benefits to the employee experience, along with enhanced opportunities to generate revenue, but it is at the same time introducing risks that business leaders need to respond to. Establishing a culture that encourages responsible and transparent use of AI, alongside a governance framework that includes comprehensive programs to educate and empower personnel in responsible AI use, is key.
Itthipat Limmaneerak
Consulting Partner
Data, AI & Analytics
KPMG in Thailand
AI in society
Four in five people report personally experiencing or observing benefits of AI, including reduced time spent on mundane tasks, enhanced personalization, reduced costs and improved accessibility.
However, four in five are also concerned about risks, and two in five report experiencing negative impacts of AI. These range from a loss of human interaction and cybersecurity risks through to the proliferation of misinformation and disinformation.
In Thailand, regulatory bodies have proposed two draft laws related to Artificial Intelligence (AI): the Draft Royal Decree on Business Operations that Use Artificial Intelligence Systems and the Draft Act on the Promotion and Support of AI Innovations in Thailand. They have also proposed draft principles of AI Regulations to align the Draft Decree and Draft Act with advancements in AI development, relevant laws and practices in other countries, and the domestic context.
These regulations and related guidance seek to promote core principles for the ethical use of AI, including Transparency and Explainability, Fairness and Equity, Security and Safety, Robustness and Reliability, Human-centricity, Privacy and Data Governance, and Accountability and Integrity. The public hearing process on the draft principles of AI Regulations concluded in June 2025. The feedback from that process is currently being assessed with a view to incorporating amendments into the Draft Decree and Draft Act before enactment. While the timeline for the announcement and enforcement of the AI regulations remains uncertain, we should anticipate a future in which the ethical use of AI is subject to regulation, which in turn should promote public trust.
Threenuch Bunruangthaworn
Legal Director
KPMG in Thailand
About this report
The University of Melbourne research team, led by Professor Nicole Gillespie and Dr Steve Lockey, independently designed and conducted the survey, data collection, analysis, and reporting of this research.
This study is the fourth in a research program examining public trust in AI. The first focused on Australians’ trust in AI in 2020, the second expanded to study trust in five countries in 2021, and the third surveyed people in 17 countries in 2022.
This research was supported by the Chair in Trust research partnership between the University of Melbourne and KPMG Australia, with funding from KPMG International, KPMG Australia, and the University of Melbourne.
About KPMG International
KPMG is a global organization of independent professional services firms providing Audit, Tax and Advisory services. KPMG is the brand under which the member firms of KPMG International Limited (“KPMG International”) operate and provide professional services. “KPMG” is used to refer to individual member firms within the KPMG organization or to one or more member firms collectively.
KPMG firms operate in 142 countries and territories with more than 275,000 partners and employees working in member firms around the world. Each KPMG firm is a legally distinct and separate entity and describes itself as such. Each KPMG member firm is responsible for its own obligations and liabilities.
KPMG International Limited is a private English company limited by guarantee. KPMG International Limited and its related entities do not provide services to clients.
For more detail about our structure, please visit kpmg.com/governance.
About KPMG in Thailand
KPMG in Thailand, with more than 2,500 professionals offering Audit and Assurance, Legal, Tax, and Advisory services, is a member firm of the KPMG global organization of independent member firms affiliated with KPMG International Limited, a private English company limited by guarantee.
For media inquiries, please contact:
Sasiphim Koodisthalert
Email: sasiphim@kpmg.co.th