Artificial Intelligence (AI) has moved well beyond experimentation and is now actively reshaping how leading organizations identify, assess, and manage risk. Across sectors, early adopters are deploying AI to improve efficiency, enhance decision-making, and support more forward-looking risk management. In contrast to these leading firms, Belgian data shows that a significant number of organizations use little to no AI support in their risk management processes, most often citing data quality as the biggest hurdle.
At the same time, the growing use of AI introduces new risks of its own for the firms that adopt it, ranging from data leakage and model errors to governance and accountability challenges. These risks need to be fully understood before they can be effectively governed and controlled.
Understanding AI in risk: the three types of AI
Ever since OpenAI released GPT-3.5 in 2022, interest in AI has grown explosively in virtually every sector. Because Large Language Models (LLMs) like ChatGPT take natural language in and produce natural language out, AI became a tool available to virtually everyone, instead of exclusively to individuals specialized in software engineering and neural networks.
While this widespread availability has made LLMs the most prominent topic, it is important to recognize that AI, even in a business context, can take on various forms. A clear delineation between these forms cannot always be made, as elements of each are often present in a single AI application, but they carry different implications for their added value in the field of Risk Management on the one hand, and for their required governance and oversight on the other.
- Traditional machine learning (ML) has existed for many decades and focuses on learning patterns from structured, historical data to predict outcomes or classify cases. Given sufficient training data, these models can be trained to perform or support well-defined, single-purpose activities in a business context (a minimal sketch follows this list).
- LLMs have existed for around a decade and have seen widespread deployment in the last three years: they work with unstructured information and generate human-like outputs, enabling use cases such as drafting policies, summarizing controls, or scanning emerging risks.
- Agentic or autonomous AI refers to systems that, often supported by both traditional ML and LLMs, can perform multi-step tasks autonomously or semi-autonomously. This introduces potential new efficiencies but also new risk considerations.
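To make the first category concrete, the sketch below shows a traditional ML model learning from structured, historical data to classify cases. The file name, feature columns, and risk-flag target are hypothetical illustrations, not a reference implementation.

```python
# A minimal sketch of the "traditional ML" category: a classifier trained on
# structured, historical data. The file name, feature columns, and target
# below are hypothetical illustrations, not a reference implementation.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical historical incident data: structured features, known outcomes.
df = pd.read_csv("historical_incidents.csv")  # assumed file
X = df[["transaction_amount", "counterparty_age_days", "prior_incidents"]]
y = df["resulted_in_loss"]  # 1 = risk materialized, 0 = no loss

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
model = LogisticRegression().fit(X_train, y_train)

# The trained model supports one well-defined activity: scoring new cases.
print("Hold-out accuracy:", model.score(X_test, y_test))
```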
Understanding these differences is only one part of the challenge. The other is to ensure that people across the organization have the skills to use these systems responsibly. As the autonomy and scope of AI tools increase, so does the level of oversight they require, and that oversight must be tailored to the tool in question. Regardless of the tool, however, firms should prioritize AI literacy at all levels of the organization. An in-depth knowledge of AI-based technology is unnecessary to benefit from these tools, but a baseline understanding of how AI tools learn and are trained, and of how they produce their output, can not only strongly enhance their use but also facilitate the set-up of the required governance.
AI literacy in Belgium
Belgium’s AI landscape reflects a clear paradox. A 2025 KPMG Pulse Survey on AI, covering 1,029 respondents in Belgium, showed that adoption is already widespread. AI is being used across age groups, education levels, and professional roles, including by more than half of working professionals. On the surface, AI has become a normal part of daily work and personal life.
Beneath this uptake, however, lies a pronounced AI literacy gap. The survey shows that most respondents have limited understanding of how AI works, struggle to choose appropriate tools, and lack confidence in evaluating AI-generated outputs. With 76% having received no formal AI training, Belgium scores significantly lower on AI literacy than global benchmarks. In practice, this means that many employees are using AI without fully understanding its limitations, risks, or appropriate applications.
For organizations, this gap has direct consequences. Governance and risk management for AI tools cannot function effectively unless employees have the knowledge and confidence to use these tools responsibly. Without sufficient literacy, even well-designed governance frameworks may be undermined by three key risks:
1. Employees bypassing rules and regulations, for example by using AI tools that are prohibited under the organization's policy, or by entering sensitive information into public tools. In the Pulse Survey, over 50% of respondents indicated that they have pasted sensitive information into a public LLM tool at least once.
2. Employees relying on AI-generated output regarding sensitive topics without validating this output.
3. Loss of know-how about processes that are now AI-supported. When process steps are automated through AI, the organization risks not only becoming unable to perform the process without AI support, but also becoming unable to validate that the AI model is still up-to-date and performing its analysis correctly. A process that turns into a “black box” in the value chain can lead to significant financial losses for companies that deploy AI tools with insufficient governance and know-how. Multiple historical examples of this exist, with the most drastic losses often pertaining to AI tools that independently set purchasing and sales prices.
The Belgian data therefore highlights the benefits of targeted AI education and practical upskilling, both to maximize the ROI of AI deployment in the organization and to effectively mitigate the risks inherent to AI.
AI’s value proposition for risk management
While AI introduces new risks, it also creates significant opportunities for strengthening the risk function. While these opportunities are broad, they can be loosely categorized into four quadrants by distinguishing between Complementary and Supplementary AI tools on one hand, and between the macro level and the micro level on the other.
Complementary AI refers to AI systems, or automation systems supported by AI, that operate alongside risk professionals. Almost all agentic AI systems fall under this category. These systems can translate input into usable insights, or perform otherwise manual tasks, without direct human input. Risk professionals retain responsibility for the output of these tools, but from an oversight perspective rather than a performer perspective. The tools require oversight not just to ensure their output can be relied on, but also to ensure they are retrained over time so that the model adapts along with the field in which it is deployed.
Supplementary AI refers to AI systems, or other systems supported by AI, that are actively used by risk professionals. Systems such as Microsoft Copilot, which can support various tasks performed by a risk professional, fall under this category. The concept of “human in the loop” is easier to maintain for supplementary AI than for complementary AI, but it remains critical that risk professionals validate the output of supplementary AI tools before relying on it.
The second differentiation is the level at which these tools are deployed in the organization. AI deployed at the macro level, in a risk context, refers to high-level or enterprise-wide tools that most often support strategic risk management. AI deployed at the micro level refers to the support of basic daily operations by risk professionals.
When plotting this on a 2x2 matrix, use cases can be identified in each quadrant. The governance they require depends on their place in this matrix, with complementary AI requiring stronger governance than supplementary AI, and macro-level AI requiring stronger governance than micro-level AI. As the required level of governance increases, so do the required data infrastructure and maturity of the organization: AI tools become more effective and efficient with better data quality and availability, as they otherwise depend on context that must be manually provided to the tool. This higher level of required governance and infrastructure does, however, come hand in hand with a higher potential upside for the risk professional, with the added value shifting from ad hoc efficiency gains toward integrated improvements along the value chain. Plotting these four quadrants against a standard risk management cycle additionally allows for visualization of where, and at what point in the cycle, this value gets added.
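As a minimal illustration, the sketch below encodes the two axes of this matrix and the resulting governance ordering. The tier values and example use cases are assumptions for illustration; in particular, the relative ordering of the two middle quadrants is not prescribed by the matrix itself.

```python
# A minimal encoding of the 2x2 matrix above. Tier values and example use
# cases are illustrative assumptions; the text only prescribes that
# complementary > supplementary and macro > micro in required governance,
# so the ordering of the two middle quadrants is itself an assumption.
GOVERNANCE_TIER = {
    ("supplementary", "micro"): 1,  # e.g. Copilot-style daily task support
    ("supplementary", "macro"): 2,  # e.g. GenAI review of policies and KRIs
    ("complementary", "micro"): 3,  # e.g. automated vendor assessments
    ("complementary", "macro"): 4,  # e.g. autonomous horizon-scanning feeds
}

def required_governance(mode: str, level: str) -> int:
    """Return the required governance tier (1 = lightest, 4 = strongest)."""
    return GOVERNANCE_TIER[(mode, level)]

print(required_governance("complementary", "macro"))  # -> 4, strongest oversight
```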
Macro-level applications
- Horizon scanning
As horizon scanning, by definition, focuses on external economic developments that might impact the organization, AI tools that can access external news sources and scan them for key risk indicators can help a risk professional recognize important events ahead of time. These tools can be barebones in their deployment, with a simple GenAI prompt in a tool that has internet access; a minimal sketch of such a set-up follows. Additional value can be created by further tailoring the tool in multiple ways, e.g. by automating the generation of this analysis, by introducing obsoleteness checks and predetermined warning signals, or by providing organizational data to allow for a more focused analysis.
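The sketch below assumes the `openai` Python package and an OpenAI-compatible endpoint; `fetch_headlines` is a hypothetical helper standing in for whichever news source the organization can access, and the model name is illustrative.

```python
# A minimal horizon-scanning sketch: feed recent headlines to an LLM and ask
# for flagged key risk indicators. Assumes the `openai` package and an
# OpenAI-compatible endpoint; `fetch_headlines` is a hypothetical placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def fetch_headlines() -> list[str]:
    # Placeholder: in practice this would query a news API or RSS feeds.
    return [
        "Regulator announces stricter KYC requirements",
        "Key supplier files for bankruptcy",
    ]

KRI_PROMPT = (
    "You are a risk analyst. For each headline below, state whether it touches "
    "one of our key risk indicators (regulatory change, supply chain, credit) "
    "and, if so, suggest an early warning signal to monitor."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute the approved internal one
    messages=[
        {"role": "system", "content": KRI_PROMPT},
        {"role": "user", "content": "\n".join(fetch_headlines())},
    ],
)
print(response.choices[0].message.content)
```

Scheduling a script like this as a recurring job and persisting its output would cover the automation tailoring mentioned above.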
Due to the importance of the horizon scanning activity, the recommended methodology is to perform it as part of the risk assessment cycle, but in addition to the established processes. This allows for a low-cost, low-effort improvement to the maturity of risk identification efforts, where the initiation of the risk management cycle can rely on both human input and AI input.
- Strategic risk management
A core strength of GenAI is its ability to analyze significant amounts of written data, such as policies, risk assessments, and business impact analyses. From there, a risk professional can augment their own task package by having a GenAI tool perform completeness and obsoleteness checks, draft improvement suggestions, or draft new policies and assessments from scratch. This allows them to enhance the assessment of control frameworks and KRIs by performing these assessments in tandem with GenAI tooling; a brief sketch of an obsoleteness check follows.
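Reusing the same assumed OpenAI-compatible client as above, the sketch below asks a model to flag outdated or missing elements in a policy document. The file name and review criteria are hypothetical, and a risk professional would validate every finding.

```python
# A minimal sketch of a GenAI completeness/obsoleteness check on a policy.
# The file name, model name, and review criteria are hypothetical.
from openai import OpenAI

client = OpenAI()
policy_text = open("information_security_policy.txt").read()  # assumed file

check = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{
        "role": "user",
        "content": (
            "Review the policy below for completeness and obsoleteness. "
            "Flag references to outdated regulations, missing mandatory "
            "sections, and controls that no longer match current practice. "
            "Return a bulleted list of findings for human validation.\n\n"
            + policy_text
        ),
    }],
)
print(check.choices[0].message.content)  # validated by a risk professional before use
```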
Micro-level applications
- (Partial) Automation and tooling
AI-supported automation is still a niche topic and agentic AI is still in its infancy, and both carry a strong 1st line focus in the current technological landscape. Despite this, targeted AI-tooling solutions already exist that can support an organization's 2nd line activities:
- Leading GRC-tooling providers are beginning to implement both GenAI and Machine Learning solutions in their set-up.
- For topics where sufficient data is available to train models to the required level of accuracy, AI-supported automation tooling is often already available on the market (e.g. vendor assessments, KYC/AML, etc.).
As a result, AI-based solutions for the monitoring responsibilities of risk professionals show early promise, even beyond the currently established specialized tooling.
- Supporting daily tasks
As is the case for almost every role within an organization, efficient use of AI tools such as Microsoft Copilot can enhance both the speed and the quality of work performed. For a risk professional in particular, significant value can be added by making company data available to the tool.
In conclusion, significantly more value can be derived from AI tooling in risk management when taking a structured approach and considering use cases at each moment in the risk management cycle. This allows risk professionals to move from using AI solely for ad hoc efficiency and quality gains towards systematically integrating AI solutions in their value chain.
Why governance must often adapt to the tool
As organizations adopt a broader spectrum of AI applications, a one-size-fits-all risk management methodology quickly becomes insufficient. Each category of AI introduces its own risk profile and therefore requires its own governance approach. While a central AI governance model remains beneficial for defining tool-agnostic topics such as data governance and acceptable use, downstream modifications or additions need to be possible for the governance model to enable AI use while still effectively managing its risks. If the entire governance framework is instead tailored towards the lowest-risk AI tools available, oversight and internal controls will quickly become insufficient. If, on the other hand, it is tailored towards the highest-risk tools, many functionalities of other tools will be restricted or forbidden outright. While conceptually obvious, few organizations make this distinction effectively, with many instead broadly restricting the use of AI tools.
Similarly, different priorities need to be drawn up depending on the kind of tool that is being deployed:
- Public LLMs are sensitive to data leakage. As noted earlier, over 50% of respondents to the KPMG Pulse Survey in Belgium indicated having inappropriately entered sensitive information into a public LLM at least once. The more specific this information is, and the more easily it can be linked back to the organization, the greater the risk of it being included in the LLM's training data and becoming available to any and all users of the tool.
- Internal LLMs safeguard this sensitive information, but shift the responsibility for keeping these models up-to-date, properly trained, and aligned with internal quality expectations onto the organization.
- Machine Learning models embedded in organizational processes (e.g. pricing, KYC, AML, transaction monitoring, …) require the most robust risk management, as there is often a strong reliance on their output without human intervention at the individual sample level. Special consideration needs to be given to:
- Transparency: understanding how the model makes decisions and which inputs drive outcomes.
- Exception triggers: mechanisms that flag out-of-pattern behavior so human reviewers can intervene (a minimal sketch follows this list).
- Periodic performance reviews: ensuring the model remains accurate, fair, and aligned with policy.
- Controlled retraining: allowing the model to learn from new data without reinforcing wrong patterns or introducing bias.
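As a minimal sketch of the exception-trigger idea under assumed thresholds: model outputs in an ambiguous score band are routed to a human review queue instead of being acted on automatically. The cutoffs, case IDs, and queue are illustrative.

```python
# A minimal sketch of the "exception triggers" idea: route out-of-pattern
# cases to a human reviewer instead of acting on the model output blindly.
# The thresholds and review queue are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Decision:
    case_id: str
    score: float          # model output, e.g. a fraud probability
    auto_approved: bool

REVIEW_BAND = (0.3, 0.7)            # assumed cutoffs: scores here are ambiguous
human_review_queue: list[str] = []  # stand-in for a real escalation workflow

def apply_with_exception_trigger(case_id: str, score: float) -> Decision:
    """Act automatically only on confident scores; escalate the rest."""
    if REVIEW_BAND[0] <= score <= REVIEW_BAND[1]:
        human_review_queue.append(case_id)  # flag for human intervention
        return Decision(case_id, score, auto_approved=False)
    return Decision(case_id, score, auto_approved=(score < REVIEW_BAND[0]))

print(apply_with_exception_trigger("TX-1042", 0.55))  # lands in the review queue
```

The same pattern generalizes: whichever model is embedded in the process, the trigger defines where automated reliance stops and human oversight begins.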
As AI becomes more autonomous, these requirements intensify. The more a system can act independently, the more structure and clarity are needed around when (and how) humans stay in control.
Shared responsibility: The need for coordinated multidisciplinary ownership
Just as AI tools require tailored governance, effective AI governance cannot be delegated to a single team. Given the complexity of the tooling, the potential legal implications of the EU AI Act, the embedding of the tooling into 1st line processes, etc., multiple teams need to be involved to align on all risk management requirements.
- The Business (1st line) remains responsible for the output of the tool, even when tasks are automated. They define the operational context in which the tool operates and validate the output afterwards.
- Compliance and Legal is responsible for ensuring adherence to regulatory and legal boundaries, including creating and continuously updating a register of approved AI tools and their risk scores, in line with the EU AI Act.
- IT and Data is responsible for maintaining the technical integrity and security of the deployed tooling. Furthermore, they control the flow of data that is indicated by the business as being relevant for tool use and training.
- Risk acts as a catalyst to define this ownership among the different teams, and to design and monitor the required controls.
All stakeholders require at least a baseline understanding of how these models function and of their limitations. Without this, meaningful oversight cannot be guaranteed, and AI-related risks are likely to slip through the cracks when each team assumes that another will cover them.
Linking AI risk management to the KPMG Trusted AI Framework
As AI governance becomes more complex and tool-specific, organizations often need a structured approach to translate principles into practice and to ensure all bases are covered. KPMG developed the Trusted AI Framework for this reason: to ensure that, despite the required involvement of various stakeholders within the organization and the many required practices and legal obligations related to the topic, tool-oriented governance can remain efficient and structured.
The Trusted AI Framework is KPMG's strategic approach to designing, building, deploying, and using AI strategies and solutions in a responsible and ethical manner.
Key takeaways
The breakthrough of LLMs has been one of the more disruptive technological advancements of the 21st century. Organizations have been forced to adapt to a new way of working, but also to anticipate further disruptions from future developments in the field. Because these disruptions will likely differ widely by sector and department, risk teams are perfectly positioned to monitor these developments: to ensure that the organization does not lag behind in adopting these technologies, but also that the related risks are managed effectively. This can be done through tool-specific governance, which allows organizations to deploy a wider range of AI tools and to benefit from their full functionality. It cannot be done confidently, however, without the required AI literacy being present within the organization, not just among risk professionals but also at management level and embedded across all layers of the business.