AI and AI-driven solutions are becoming increasingly intertwined with your organization. This means greater impact, but also more potential risks and stricter requirements from laws and regulations. You need to understand how your organization is performing in the area of AI and whether you're compliant. Are all your processes and systems in order? Have you addressed the risks of the tools you're using or developing? Are you being transparent and actively preventing issues such as discrimination? It's essential to test carefully and independently.
With AI Assurance, we provide an independent assessment of how your AI systems function and the measures you've taken to deploy AI responsibly. This can be done at two levels:
1. Governance
We examine how AI usage is embedded within your organization. We check, for example, whether roles and responsibilities are clearly assigned, whether you meet the requirements of the AI Act, and whether AI-related risks are being proactively managed. Our focus is on the interaction between technology, people, processes, and ways of working.
2. Individual AI systems
The focus can also be on individual AI systems. In that case, we look at the measures taken to ensure the quality and compliance of the AI system. We can also assess whether the outcomes of an AI application align with its objectives and with legal requirements. Independent assurance of an AI system can be an important step in moving from a proof of concept (PoC) to a functioning, more widely implemented solution.
Human insight remains essential in AI
AI is more than just technology. What sets our approach apart is that we don’t only focus on models and data—we also consider the context surrounding the technology. What role does an AI system play in the business process? What safeguards are in place? How do humans and technology interact? Evaluating AI requires human insight and expertise across multiple domains. At KPMG, we’ve brought together all the relevant disciplines—data science, audit, IT, law and regulation, and ethics—into one integrated team. As early as 2019, we supported the City of Amsterdam in responsibly developing algorithms. Since then, we’ve continuously refined our approach based on the latest developments and insights.
AI assurance in three steps
In an AI assurance engagement, KPMG provides an independent assessment of specific AI systems or the way an organization manages AI. Together with our clients, we follow three steps tailored to the specific risks associated with AI:
Step 1:
Together with the client, we define the scope of the assessment. Which aspects require assurance? Is the focus on the performance of an AI system, or on potential bias? What techniques and models have been used? What kind of technology are we dealing with? A relatively simple classification model that supports decision-making is very different from a complex LLM system that makes decisions autonomously. In short, we carefully define the scope of the engagement to arrive at a clear and achievable approach.
Step 2:
We define the evaluation framework needed to properly assess the case. We also determine which testing activities are necessary to support our conclusions. In doing so, we use our internationally developed Trusted AI Framework along with relevant standards and regulatory guidelines, such as the AI Act and ISO 42001. Our focus is strictly on the risks that truly matter, ensuring that the assessment places minimal burden on the systems and teams involved.
Step 3:
We carry out the audit and deliver our findings. Throughout the process, we act with the professional skepticism of an auditor: collaborating where needed and challenging where necessary. By combining our team's specialized AI knowledge with deep expertise in audit standards, we ensure a thorough analysis. This allows us to go beyond a surface-level process audit and give our clients meaningful insight into the real risks within and around their AI systems.
Improving AI governance in your organization
To pass an AI audit, strong technical performance alone isn’t enough. And effective AI governance requires more than just formally assigning AI roles within the organization. Conducting an AI assurance assessment without proper preparation can lead to disappointment. For many organizations, achieving AI assurance is a growth journey. KPMG supports this journey by helping design and implement the right measures, and by conducting readiness assessments to determine whether compliance and risk management are sufficiently demonstrable to withstand an external audit. This enables organizations to work purposefully toward the desired level of maturity in AI governance.
Want to learn more? Let’s start the conversation.