On 1 October 2024, the Board Leadership Center welcomed Öztürk Taspinar, Partner at KPMG in Belgium; Annelies De Corte, Director at KPMG in Belgium; Marijke Schroos, General Manager at Microsoft BeLux; Stephanie Cox, Former Managing Director at Proximus Ada; and Jan Smedts, CEO at Digitaal Vlaanderen, for a discussion on how artificial intelligence (AI) is reshaping the business landscape and how boards can lead their organizations through this transformative journey.

AI is not a new topic, but recently it’s gained a lot of traction and is high on many companies’ and boards’ agendas. However, the challenges have shifted over time and today they are more transversal, spanning from risk management to change management. With various inputs to consider, how should an organization tackle AI transformation?

Structure the journey

Many organizations start with AI use cases to look for value and build the business case for AI. Starting this way helps build trust within the organization. But then what? How do you move from use cases to embedding AI in the business? How do you think about the risks and ethical concerns? AI transformation is a journey that includes strategic reflection, setting an AI roadmap and target operating model (TOM), consideration of risk management and compliance, as well as change management.

Strategic reflection

An important first step is to take a strategic view of how AI will impact your value drivers and business strategy. To enable this conversation, boards and leadership might ask questions such as:

  1. How do we envision incorporating (generative, or “gen”) AI into our corporate strategy process and operating goals going forward?
  2. What near- and longer-term benefits does (gen)AI offer the company and its strategy?
  3. Assuming our customers, competitors, and suppliers are also rolling out (gen)AI, what would that do to our company’s revenue and cost over the next one, three, five years? What revenue is at risk? What new revenue can be generated? What costs will be reduced? What price pressure or opportunity does the company see?
  4. Which new offerings are we planning to take to market?

(Data &) AI strategy

Your data and AI strategy should be shaped to support your business strategy.

Start by formulating a clear AI ambition for the coming 3-5 years that’s intricately linked to your business strategy. Reflect on the KPIs you will use to follow up on the progress you’re making. Ensure you have the right data governance in place. There’s enormous potential if you can combine and leverage data sources in the right way. However, it will be difficult to reach a bold AI ambition with low data maturity.

In this stage, you might consider the following questions:

  1. Who in management is on point for driving and coordinating the (gen)AI transformation, and how is work distributed across different C-level executives?
  2. What is our company’s ambition for AI?
  3. How much has the company invested in (gen)AI this fiscal year and how much will need to be budgeted for next year?
  4. How will (gen)AI be used in the different supporting functions and in the core of the operations?
  5. Which measurable value should this translate into this year?
  6. How are we following up on value realization?

AI roadmap

In shaping your strategy, it’s important to formulate an AI roadmap that’s integrated into your broader portfolio. To do so, you might think about:

  1. Which initiatives do we have on our roadmap?
  2. What measurable productivity improvements should this translate into?

AI Target Operating Model (TOM)

Who will do what, in which way, within the organization with regard to AI?

[Figure: AI Target Operating Model]

In reflecting on the set-up of your AI TOM, there are multiple factors to consider:

  • People: Will you have a central AI team, or will AI specialists sit within existing teams?
  • Processes: Which processes will employees follow (e.g., when designing new use cases)?
  • Performance & value management
  • Service delivery model: Will you build or buy? Buying may be a more appropriate strategy for so-called “everyday AI,” which aims to increase productivity, while building may be the better choice for your “game-changing AI,” which aims to increase your competitive advantage.
  • Data, technology & architecture
  • Governance: Ensure governance is in place so that you build and deploy AI in a trustworthy way.

To enable this conversation, consider the following questions:

  1. Has management considered appointing a Chief AI Officer (CAIO) to spearhead the change?
  2. How are we organized to successfully identify, build, and deploy AI?
  3. Have we connected our (gen)AI tools to our own proprietary data?
  4. Which data are algorithms being trained on?
  5. Who owns the data, and how is the company monitoring for quality and bias?
  6. Do we need to recruit new or other profiles?
  7. Where are strategic/tactical/operational decisions on AI taken and who is consulted?
  8. How will this impact our strategic workforce planning?

AI risk management & compliance

The European Union AI Act applies to any provider placing an AI product or service on the market within the EU and all deployers of those products and services in the EU. The Act takes a risk-based approach to AI systems and establishes clear transparency obligations.

One key challenge for companies is creating an exhaustive list of applicable AI systems. Particularly within the “High Risk” category, more AI systems may fall into scope than companies initially expect.

[Figure: AI risk management & compliance pyramid showing the different levels of risk]
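To make the inventory exercise concrete, the sketch below shows one way a company might record its AI systems and flag the higher-risk ones for compliance follow-up. The system names, owners, and simplified four-tier classification are illustrative assumptions only, not legal guidance on how the Act applies to any particular system.

```python
from dataclasses import dataclass
from enum import Enum

# Risk tiers broadly mirroring the EU AI Act's risk-based approach.
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited practices
    HIGH = "high"                   # subject to strict obligations
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # largely unregulated

@dataclass
class AISystem:
    name: str
    owner: str          # accountable business owner
    purpose: str
    risk_tier: RiskTier
    is_provider: bool   # do we place it on the EU market?
    is_deployer: bool   # do we use it within the EU?

# Hypothetical inventory entries, for illustration only.
inventory = [
    AISystem("CV screening assistant", "HR", "rank job applicants",
             RiskTier.HIGH, is_provider=False, is_deployer=True),
    AISystem("Marketing copy generator", "Marketing", "draft campaign text",
             RiskTier.LIMITED, is_provider=False, is_deployer=True),
]

def systems_needing_review(systems):
    """Flag systems in the higher risk tiers for compliance follow-up."""
    return [s for s in systems
            if s.risk_tier in (RiskTier.UNACCEPTABLE, RiskTier.HIGH)]

for system in systems_needing_review(inventory):
    print(f"Review required: {system.name} ({system.owner}) - {system.risk_tier.value}")
```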

Cybersecurity is another crucial factor in the risk management of AI. The two are increasingly linked, as advancements in one put pressure on the other. Cyberattacks are becoming more frequent and more sophisticated as AI automates attacks, learns from experience, and makes attacking more accessible to those with less technical knowledge. At the same time, AI can be used to develop better algorithms that improve security. One company also created AI chatbots to guide employees through their cybersecurity processes.

Another crucial step is AI model validation and documentation. Validation here requires a somewhat different approach than for traditional rule-based models, because AI models learn their behavior from example data rather than from explicitly defined rules.
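As a minimal sketch of what validation and documentation can look like in practice, the example below trains a model on synthetic data, evaluates it on a holdout set it has never seen, and writes a small validation record. The model name, data version, and metrics are hypothetical placeholders; a real validation would be broader (stability, bias, robustness) and tailored to the model’s use case.

```python
import json
from datetime import date

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic data stands in for the real training set in this sketch.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Evaluate on data the model has never seen.
pred = model.predict(X_test)
scores = model.predict_proba(X_test)[:, 1]

# A lightweight validation record that can be stored alongside the model.
validation_record = {
    "model": "credit_scoring_v1",        # hypothetical model name
    "validated_on": str(date.today()),
    "holdout_accuracy": round(accuracy_score(y_test, pred), 3),
    "holdout_auc": round(roc_auc_score(y_test, scores), 3),
    "data_version": "2024-09 extract",   # hypothetical dataset reference
    "known_limitations": "not tested on out-of-distribution segments",
}
print(json.dumps(validation_record, indent=2))
```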

To enable discussions around risk management and compliance, boards and leadership could ask questions such as:

  1. What are the major AI-related risks that we need to tackle first?
  2. Which (gen)AI governance framework and policies have we implemented already and what comes next?
  3. Are the company’s guardrails and compliance practices sufficient to help drive trust and transparency in tandem with the benefits?
  4. How have we increased our cybersecurity over the last 12 months since (gen)AI arrived?

Learn more about the AI Act and the implications for boards.

Change management

We have all heard the phrase, “culture eats strategy for breakfast.” The case for change is a critical component of your AI journey. It should be created early on and communicated often throughout the organization to ensure that all employees are bought in and to build trust in the AI systems.

Key questions to raise include:

  1. Who is leading the change?
  2. What is our “case for change”?
  3. How are we preparing and upskilling our employees?
  4. Which strategies are we implementing to manage resistance and ensure buy-in from key stakeholders?

Sustainability of AI

While AI is one hot topic on board agendas, sustainability is another. Given the energy demands of AI systems, it’s important to also consider how you are building your AI in a sustainable way. Measuring the energy demands of your AI systems is one starting point.
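Even a back-of-the-envelope estimate can make this discussion concrete. The sketch below multiplies hardware power draw by runtime and a data-centre overhead factor (PUE), then converts the result into emissions using an assumed grid carbon intensity. All figures are illustrative assumptions; real measurements from your cloud provider or facility should replace them.

```python
def estimate_energy_kwh(num_gpus, avg_power_watts, hours, pue=1.5):
    """Rough energy estimate: hardware draw scaled by data-centre overhead (PUE)."""
    return num_gpus * avg_power_watts * hours * pue / 1000.0

def estimate_co2_kg(energy_kwh, grid_intensity_kg_per_kwh=0.25):
    """Convert energy use into emissions using an assumed grid carbon intensity."""
    return energy_kwh * grid_intensity_kg_per_kwh

# Hypothetical fine-tuning job: 8 GPUs at ~400 W each, running for 72 hours.
energy = estimate_energy_kwh(num_gpus=8, avg_power_watts=400, hours=72)
print(f"Estimated energy use: {energy:,.0f} kWh")
print(f"Estimated emissions:  {estimate_co2_kg(energy):,.0f} kg CO2e")
```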

However, it’s important to note that while AI systems may require a lot of power, they can also be instrumental in solving other issues, such as grid optimization, asbestos detection in roofs, or disease detection in forests.

As the marginal costs of intelligence decrease, the transformative power of AI increases. It’s the responsibility of leadership to assess what they will and won’t use it for.

Also read our boardroom questions on AI in the Boardroom.

Tips, tricks, and lessons learned

Transitioning to an AI-enabled organization presents unique challenges and opportunities, requiring strategic foresight and governance that align technological advancement with long-term business goals. Organizations are in different stages of their AI journey. To help you learn from what others have done already, from fostering a culture of data-driven decision-making to managing ethical and regulatory concerns, here are a few top tips and tricks:

  1. Hurry up! Start to think about the impact that AI will have and get started on your journey. The regulatory environment in the EU does not need to be a competitive disadvantage. On the contrary, it can protect individuals, enable the use of technology for the better and put guardrails in place. However, as one panelist put it: “regulation should not only be well-intended, but well-executed.”
  2. AI is an extremely powerful tool, but it’s not a magic wand. It’s a technology that needs to fit into your business strategy. Think about which tools will add the most value to the company and best align strategically.
  3. AI is not only for the technology team. Involve business stakeholders early and manage expectations. Working together not only ensures that use cases are fit-for-purpose, but it builds the trust in the AI systems needed to enable adoption.
  4. People tend to overestimate the complexity and time needed for the development of AI models, and underestimate the time needed for everything else. Implementing new processes, change management, monitoring over time, and measuring progress are all important parts of a successful AI transformation.
  5. If you want to scale, ensure the right governance is in place. For example, one company implemented legal checks to ensure that the data used in an AI model is not discriminatory (a minimal example of such a check is sketched after this list). Another has a policy that any AI system developed by a single person requires additional testing. Some organizations block certain free AI tools to minimize risks.
  6. Educate everyone in the organization. Ensure that those who are using AI tools trust the tools and are using them in the right way. People should be appropriately skeptical and be able to make the right decisions. Remember, where no approved internal tool exists, people may turn to free tools. It’s important that they understand the risks, particularly when it comes to sharing proprietary or sensitive data. Middle management and people leaders can play a significant role here in educating their teams.
  7. Diversity is important. Diverse teams working on AI will lead to more fair and inclusive outcomes.
  8. Use it for “the better.” Define what that means to your organization, including consideration of any ethical questions relevant to your business and AI use cases. 
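Referring back to tip 5, the sketch below illustrates one simple check of the kind a legal or ethics review might request: comparing selection rates across groups in a set of hypothetical model decisions and flagging a large gap. The data, the 0.8 threshold (a common rule of thumb, not a legal standard), and the choice of metric are all illustrative assumptions; real fairness reviews involve more than a single ratio.

```python
import pandas as pd

# Hypothetical model decisions with a protected attribute, for illustration only.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   1,   0,   0,   0],
})

# Selection rate per group and the ratio between the lowest and highest rate.
rates = decisions.groupby("group")["approved"].mean()
disparate_impact = rates.min() / rates.max()

print(rates)
print(f"Disparate impact ratio: {disparate_impact:.2f}")

# A common (but jurisdiction-dependent) rule of thumb flags ratios below 0.8.
if disparate_impact < 0.8:
    print("Flag for legal/ethics review before scaling this model.")
```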

About the Board Leadership Center

KPMG’s Board Leadership Center (BLC) offers non-executive and executive board members – and those working closely with them – a place within a community of board-level peers. Through an array of insights, perspectives, and events – including topical seminars and more technical Board Academy sessions – the BLC promotes continuous education around the critical issues driving board agendas.