AI is revolutionising the way we live and work, making it increasingly pressing for individuals and organisations to understand its potential impact.

As AI solutions rapidly develop and new applications become available to industry, a robust framework for their application is needed to de-risk AI adoption.

The AIIA and KPMG have identified what organisations need to do to ensure AI is developed, used and procured in a way that meets future regulatory and ethical expectations.

Throughout 2023, KPMG and the AIIA hosted a series of client events across Australia. Panels of AI experts were invited to discuss their insights, predictions and research into the future of AI use. 

Read Navigating AI: Analysis and guidance on the use and adoption of AI now or explore experts’ insights in Navigating AI: Event Insights.

Moving towards a definition of AI

Right now, there is no one authoritative, consistent and overarching definition of AI. Our guide seeks to provide a useful frame of reference to support AI developers, users and regulators.

We see AI as an umbrella term for interrelated techniques and technologies – including subfields like machine learning, natural language processing and robotics. Essentially, AI enables machines to perform tasks that would normally require human intelligence – including natural language understanding, pattern recognition and decision making.

Making informed decisions with robust guidance

Australia has always been an early adopter of technology, and as AI becomes ubiquitous across the economy, practical guidance will assist business and government in making informed decisions on AI adoption.

The guide and checklist enable organisations and government to navigate all facets of AI with a full understanding of good governance, including:

  • Establishing dedicated governance frameworks
  • Designating a responsible owner for AI governance in the C-suite 
  • Tracking and monitoring AI systems and use-cases
  • Responding to reports
  • Investing in training
  • Implementing routine auditing of algorithms

Key considerations for AI adoption and use

AI impacts on economy and jobs

AI has the potential to create new opportunities and transform industries – with a predicted $315 billion boost to the economy by 2028 – but it also raises concerns about impacts on jobs. AI will likely reshape cognitive work and change the demand for different skillsets, knowledge and experience.

Trust and public perception

As with any new technology, AI inspires both excitement and anxiety. Can it be trusted? The guide discusses the role of transparency, accountability and self-regulation.

Government’s role in AI governance

How does government ensure AI benefits are realised and risks mitigated? Balancing regulation against innovation and investment is key, and our report explores the challenges.

A checklist for Australian businesses

A Checklist for Trustworthy AI provides practical governance and implementation guidance covering:

  • Organisational Alignment
  • Ethics
  • Legal
  • Data
  • Algorithms
  • Security

How KPMG can help

We help organisations navigate the ever-changing AI landscape. Together, we work to develop clear quality and risk management practices – focusing on trust, transparency and accountability. We also offer targeted advice and solutions to help governments achieve regulatory reform to protect citizens and the broader community.

Navigating AI

Analysis and guidance on the use and adoption of AI.

Get in touch


Are there regulations in place to govern the ethical use of AI?

There isn't a single comprehensive set of laws solely dedicated to governing AI's ethical use. However, various global, national and local frameworks, policies and pieces of legislation exist as guidelines. For instance, existing anti-discrimination legislation could be applied if an AI tool were to produce biased outputs, offering a route to address such ethical concerns.

What ethical concerns are associated with AI?

AI raises ethical concerns, including biases and privacy issues. However, a critical aspect is the potential harm it poses to individuals. Biased algorithms can lead to discrimination and missed opportunities. Moreover, errors in AI decision-making, especially in critical areas like healthcare or justice, can significantly impact people's lives. Balancing innovation with ethical responsibility is crucial in AI development.

Can AI systems be biased?

Bias in AI can arise from multiple sources linked to data, including sampling, exclusion and measurement bias. Addressing bias in AI systems requires a combination of steps:

  • Identifying potential sources of bias
  • Establishing clear guidelines, rules and procedures aimed at eliminating bias
  • Determining what constitutes representative data for a given use case
  • Screening for bias before launch
  • Measuring and monitoring bias throughout the life of the model
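Screening for bias before launch can be as simple as comparing favourable-outcome rates between groups. The sketch below is a minimal, hypothetical illustration of one common metric (the disparate impact ratio); the data, group labels and 80 per cent "four-fifths" threshold are illustrative assumptions, not a legal or regulatory standard.

```python
# Minimal sketch of a pre-launch bias screen using the disparate impact
# ratio: the rate of favourable outcomes for the unprivileged group
# divided by the rate for the privileged group. A ratio near 1.0
# suggests parity; values well below 1.0 warrant review.

def disparate_impact_ratio(outcomes, groups, privileged):
    """Ratio of favourable-outcome rates, unprivileged vs privileged."""
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    priv_rate = sum(priv) / len(priv)
    unpriv_rate = sum(unpriv) / len(unpriv)
    return unpriv_rate / priv_rate

# Hypothetical model outputs: 1 = favourable decision, 0 = unfavourable
outcomes = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact_ratio(outcomes, groups, privileged="A")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # illustrative threshold only
    print("Potential bias detected: review before launch.")
```

In practice, the same check would be repeated across every protected attribute and monitored throughout the life of the model, since drift in the underlying data can reintroduce bias after launch.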