A global study on trust in Artificial Intelligence (AI) reveals that more than half of people worldwide are unwilling to trust AI, reflecting an underlying tension between its evident benefits and its perceived risks.
Malta is proactively addressing AI adoption and its responsible use through a multi-faceted approach that includes strategic planning, regulatory frameworks, and initiatives aimed at fostering trust. Malta’s national AI strategy, “Strategy and Vision for Artificial Intelligence in Malta 2030,” published in 2019 and currently being realigned, outlines a vision for Malta to become a leader in the AI field, emphasising investment, innovation, and adoption across both the public and private sectors, underpinned by ethical considerations and the development of a trustworthy AI ecosystem.
Complementing this, Malta is actively implementing the EU AI Act through its technology regulator, which is taking a leading role in ensuring the safe, trustworthy, and human-centric use of AI, including the establishment of national AI certification and regulatory frameworks to build confidence in ethically aligned AI systems.
Key Findings
- The intelligent age has arrived – 66% of people use AI regularly, and 83% believe the use of AI will result in a wide range of benefits.
- Yet trust, which is central to AI acceptance, remains a critical challenge. Only 46% of people globally are willing to trust AI systems, a figure that correlates with low levels of AI literacy: only two in five (39%) report some form of AI training, and only 40% say their workplace has a policy or guidance on generative AI use.
- There is a public mandate for national and international AI regulation, with only 43% of respondents believing current regulations are adequate.
- Data suggests that just under half of organisations may be using AI without adequate support and governance.
The study, “Trust, attitudes and use of Artificial Intelligence: A global study 2025,” led by Professor Nicole Gillespie, Chair in Trust at Melbourne Business School at the University of Melbourne, and Dr Steve Lockey, Research Fellow at Melbourne Business School, in collaboration with KPMG, is the most comprehensive global examination of the public’s trust, use and attitudes towards AI.
The study surveyed over 48,000 people across 47 countries between November 2024 and January 2025.
It found that although 66% of people are already intentionally using AI with some regularity, less than half of global respondents are willing to trust it (46%).
Compared with a previous study conducted in 2022, prior to the release of ChatGPT, the findings reveal that people have become less trusting of and more worried about AI as adoption has increased.
Individuals and organisations are more likely to trust AI systems when they understand how AI works, yet the study finds that only two in five people (39%) report some form of AI training. In line with these low levels of training, almost half (48%) report limited knowledge about AI, indicating that they do not feel they understand AI or know when and how it is used.
“The public’s trust of AI technologies and their safe and secure use is central to sustained acceptance and adoption,” says Professor Gillespie.
“Given the transformative effects of AI on society, work, education, and the economy, bringing the public voice into the conversation has never been more critical.”
AI at work and in education
The age of working with AI is here, with three in five (58%) employees intentionally using AI – and a third (31%) using it weekly or daily.
This high usage is delivering a range of benefits, with most employees reporting increased efficiency, improved access to information, and greater innovation. Almost half of those surveyed report that AI has increased revenue-generating activity.
However, only 60% of organisations provide responsible AI training, and only 34% report an organisational policy or guidance on the use of generative AI tools.
The use of AI at work is creating complex risks for organisations, and a ‘governance gap’ is emerging. The study reveals almost half of employees admit to using AI in ways that contravene company policies, including uploading sensitive company information into free public AI tools like ChatGPT.
Such complacent use may reflect governance of responsible AI trailing behind adoption. Even in advanced economies, just over half of employees (55%) report that their organisation has mechanisms in place to support AI adoption and responsible use, including a strategy and culture conducive to responsible AI adoption, adequate employee training, and governance processes.
“According to the study, many users rely on AI output without evaluating its accuracy (66%) and are making mistakes in their work due to AI (56%). What makes these risks challenging to manage is that over half (57%) of employees say they hide their use of AI and present AI-generated work as their own,” Keith explains.
This lack of AI governance extends to educational institutions, only half of which have policies, resources and training in place for responsible AI use.
AI in society
73% of people report personally experiencing or observing benefits of AI, including reduced time spent on mundane tasks, enhanced personalisation, reduced costs and improved accessibility.
However, four in five are also concerned about risks, and two in five report experiencing negative impacts of AI. These range from a loss of human interaction and cybersecurity risks through to the proliferation of misinformation and disinformation, inaccurate outcomes, and deskilling.
70% believe AI regulation is required, yet only 43% believe existing laws and regulations are adequate.
There is a clear public demand for international law and regulation and for industry to partner with governments to mitigate these risks. 87% of respondents also want stronger laws to combat AI-generated misinformation and expect media and social media companies to implement stronger fact-checking processes.
AI is arguably the greatest technological innovation of our generation. Given its rapid advancement, it is imperative that AI systems are built on a foundation of good governance, which in turn helps to drive trust. Users want assurance regarding the AI systems they interact with, and the full potential of AI can only be realised if the public has confidence in the systems making decisions or assisting with them. For this reason, KPMG developed its Trusted AI approach to make the concept of trust both tangible and quantifiable for our clients.
About this report
The University of Melbourne research team, led by Professor Nicole Gillespie and Dr Steve Lockey, independently designed and conducted the survey, data collection, analysis, and reporting of this research.
This study is the fourth in a research program examining public trust in AI. The first focused on Australians’ trust in AI in 2020, the second expanded to study trust in five countries in 2021, and the third surveyed people in 17 countries in 2022.
This research was supported by the Chair in Trust research partnership between the University of Melbourne and KPMG Australia, with funding from KPMG International, KPMG Australia, and the University of Melbourne.
Nicole Gillespie is an internationally recognised scholar whose research focuses on trust, management and emerging technologies. She has been leading a program of research examining trust and public attitudes towards AI, and achieving trustworthy AI, since 2020. She holds the Chair in Trust and is Professor of Management at Melbourne Business School and the Faculty of Business and Economics at the University of Melbourne. Nicole is also an International Research Fellow at the Centre for Reputation at Oxford University, Honorary Professor at the University of Queensland, and a Fellow of the Academy of Social Sciences in Australia and the Australian and New Zealand School of Government.
David Rowlands is the Global Head of AI at KPMG. In this role, David is tasked with implementing KPMG's AI strategy, applying emerging AI technologies to enhance client engagements and the employee experience in a way that is Trusted, value-enhancing, and human-centric. This includes innovating new client solutions and ways of working, equipping all 275,000 colleagues with the latest AI capabilities, developing robust and trusted AI capabilities for clients across Audit, Tax & Legal and Advisory, and building a global AI immersion and skills programme to enable colleagues to leverage the benefits of AI. Prior to becoming Global Head of AI, David was Head of Consulting for KPMG UK from 2016 to 2023. He has a substantial track record of advising on and delivering high-profile, large-scale business transformation programmes for FTSE 100 companies across several industries, including FMCG, financial services, utilities, and government.
KPMG
KPMG is a global organisation of independent professional services firms providing Audit, Tax and Advisory services. KPMG is the brand under which the member firms of KPMG International Limited (“KPMG International”) operate and provide professional services. “KPMG” is used to refer to individual member firms within the KPMG organisation or to one or more member firms collectively.
KPMG firms operate in 142 countries and territories with more than 275,000 partners and employees working in member firms around the world. Each KPMG firm is a legally distinct and separate entity and describes itself as such. Each KPMG member firm is responsible for its own obligations and liabilities.
KPMG International Limited is a private English company limited by guarantee. KPMG International Limited and its related entities do not provide services to clients.
For more detail about our structure, please visit kpmg.com/governance.