A new global study on trust in Artificial Intelligence (AI) released today has found half (50%) of Australians use AI regularly, but only 36% are willing to trust it, with 78% concerned about negative outcomes.

The Trust, attitudes and use of Artificial Intelligence: A global study 2025, led by Professor Nicole Gillespie, Chair of Trust at Melbourne Business School at the University of Melbourne, and Dr Steve Lockey, Research Fellow at Melbourne Business School, in collaboration with KPMG, is the most comprehensive global study into the public’s trust, use and attitudes towards AI.

The study surveyed 48,340 people across 47 countries (including Australia) between November 2024 and January 2025, using representative sampling.

Australians are less trusting and positive about AI than most countries

As well as being wary of AI, Australians rank among the lowest globally on acceptance, excitement and optimism about the technology, alongside New Zealand and the Netherlands.

Only 30% of Australians believe the benefits of AI outweigh the risks, the lowest ranking of any country. Australians also trail behind other countries in realising the benefits of AI (55% vs 73% globally report experiencing benefits).

“The public’s trust of AI technologies and their safe and secure use is central to acceptance and adoption,” Professor Gillespie says. “Yet our research reveals that 78% of Australians are concerned about a range of negative outcomes from the use of AI systems, and 37% have personally experienced or observed negative outcomes ranging from inaccuracy, misinformation and manipulation to deskilling and loss of privacy or IP.”

Australia is lagging in AI literacy

Australians have amongst the lowest levels of AI training and education, with just 24% having undertaken AI-related training or education compared to 39% globally.

Over 60% report low knowledge of AI (48% globally), and under half (48%) believe they have the skills to use AI tools effectively (60% globally). Australians also rank lowest globally in their interest in learning more about AI.

“AI literacy consistently emerges in our research as a cross-cutting enabler: it is associated with greater use, trust, acceptance, and critical engagement with AI output, and more benefits from AI use, including better performance in the workplace,” Professor Gillespie says. “An important foundation to building trust and unlocking the benefits of AI is developing literacy through accessible training, workplace support, and public education.”

AI use is producing benefits at work, but also risks

Almost two thirds (65%) of Australians report their employer uses AI, and 49% of employees say they are intentionally using AI on a regular basis. Employees report increased efficiency, effectiveness, access to information and innovation.

However, the use of AI at work is also creating complex risks for organisations. Almost half of employees (48%) admit to using AI in ways that contravene company policies, including uploading sensitive company information into free public AI tools like ChatGPT. 

Many rely on AI output without evaluating its accuracy (57%) and have made mistakes in their work due to AI (59%). Many employees also admit to hiding their use of AI at work and presenting AI-generated work as their own.

“Psychological safety around the use of AI in work is critical. People need to feel comfortable to openly share and experiment with how they are using AI in their work and learn from others for greater transparency and accountability,” Professor Gillespie says.

Some inappropriate use may stem from a lack of clear organisational guidance. While generative AI tools are the most widely used by Australian employees (71%), only 30% say their organisation has a policy on generative AI use.

KPMG Australia Chief Digital Officer John Munnelly said the combination of rapid adoption, low AI literacy and weak governance is creating a complex risk environment.

“Many organisations are rapidly deploying AI without proper consideration being given to the structures needed to ensure transparency, accountability and ethical oversight – all of which are essential ingredients for trust,” he says.

Calls for greater governance

The research found strong public support for AI regulation, with 77% of Australians agreeing regulation is necessary.

Australians expect international laws and regulation (76%), as well as oversight by the government and existing regulators (80%) and co-regulation with industry (77%). However, only 30% believe current laws, regulation and safeguards are adequate to make AI use safe.

Most Australians (83%) say they would be more willing to trust AI systems when assurances are in place, such as adherence to international AI standards, responsible AI governance practices, and monitoring of system accuracy.

“The research reveals a tension where people are experiencing benefits but also potentially negative impacts from AI. This is fuelling a public mandate for stronger regulation and governance and a growing need for reassurance that AI systems are being used in a safe, secure and responsible way,” Professor Gillespie says.

“There is a striking opportunity for industry and government to foster trust in AI by building on the existing Voluntary AI Safety Standards and ensuring Australian safeguards expand in line with emerging international laws and regulations. Organisations also need to invest in the training and development of their people,” Mr Munnelly says.

“At KPMG Australia we invested early in our own governance, becoming the first organisation in the world to obtain ISO 42001 (AI) certification from BSI, meeting the standard designed to enable the safe management of AI and build the trust needed for its secure and responsible use.”

For further information

Alex Bernhardt
Media Relations
KPMG Australia
T: 0478 469 999
E: abernhardt1@kpmg.com.au

Alison Bottcher
Communications Manager
Melbourne Business School
T: 0405 812 602
E: a.bottcher@mbs.edu

About this report

The University of Melbourne research team, led by Professor Nicole Gillespie and Dr Steve Lockey, independently designed and conducted the survey, data collection, analysis, and reporting of this research.

This study is the fourth in a research program examining public trust in AI. The first focused on Australians’ trust in AI in 2020, the second expanded to study trust in five countries in 2021, and the third surveyed people in 17 countries in 2022.

This research was supported by the Chair in Trust research partnership between the University of Melbourne and KPMG Australia, with funding from KPMG International, KPMG Australia, and the University of Melbourne. 

About Professor Nicole Gillespie

Nicole Gillespie is an internationally recognised scholar whose research focuses on trust, management and emerging technologies. She has been leading a program of research examining trust and public attitudes towards AI, and achieving trustworthy AI, since 2020. She holds the Chair in Trust and is Professor of Management at Melbourne Business School and the Faculty of Business and Economics at the University of Melbourne. Nicole is also an International Research Fellow at the Centre for Reputation at Oxford University, Honorary Professor at the University of Queensland, and a Fellow of the Academy of Social Sciences in Australia and the Australian and New Zealand School of Government.

Melbourne Business School

Melbourne Business School is where the world's brightest minds come to develop the skills and attitude they need to be the leaders of tomorrow. We're the University of Melbourne's graduate school in business and economics, jointly owned by the business community and the University. Our leaders report to an independent board of directors that includes top CEOs and academics from around the globe. With accreditation from AACSB and EFMD (EQUIS), our degree programs and short courses are ranked among the best in the world.

For more details, visit mbs.edu/about-us