AI is transforming the workplace, bringing both opportunities and challenges. Olivier Elst and Bart Van Rompaye from KPMG Belgium explain why companies need to anticipate the risks and act today.
With tools like ChatGPT, Copilot, and Midjourney becoming our new colleagues, we rarely pause to consider the risks artificial intelligence (AI) brings. “AI is about much more than generative tools—it can introduce serious risks,” says Bart Van Rompaye, Head of AI at KPMG Belgium. “Its impact can be profound, from individuals unfairly denied credit by AI-driven decisions to companies outsourcing critical processes to machines without proper oversight. On a larger scale, AI also affects society, such as through the spread of misinformation,” he continues.
Olivier Elst, Partner at KPMG Belgium, warns that classic risks are just as relevant in the age of AI. “Privacy, cybersecurity, and intellectual property are still key concerns. Without sufficient knowledge and oversight, you risk significant financial or reputational damage,” he explains.
AI governance audit
To prevent a proliferation of tools and ensure a consistent approach to AI within companies, KPMG conducts AI governance audits. “We start by understanding the organization’s objectives. Next, we set clear boundaries and define roles and responsibilities,” says Elst. “Many companies are looking for more structure around AI, which also makes it easier to map and mitigate risks.”
According to Van Rompaye, these risks also arise because managers are not sufficiently familiar with AI. “You can only build trust once you understand what a technology can and cannot do. I compare AI to a new junior colleague—someone I don’t know yet and whose capabilities I still need to assess. That’s why I carefully evaluate all output. A critical attitude is crucial,” he says.
This demands a shift in mindset that many companies have yet to make, he highlights. “You need to fundamentally reorient your approach. Today, our processes are designed around human limitations, and our control mechanisms are built to match. But when you automate those tasks, you often find your processes aren’t equipped to handle the change. As a manager, you must embrace transformation.”
Embed ethical decision-making
Once companies have a clear AI strategy, they need to develop an operational model to turn vision into action. To support this, KPMG created the Roadmap to Responsible AI. “We focus on processes, policies, and decision-making structures,” explains Van Rompaye. “For example, how do we measure the value AI generates? And what implementations are feasible? This roadmap can serve as a multi-year plan to help businesses integrate AI responsibly.”
The term “responsible” goes beyond ethical considerations alone. “While transparency is important, it’s already mandated by the European AI Act. The Act’s risk levels play a crucial role in deciding which AI applications can be deployed and which should be avoided,” he clarifies.
Elst emphasizes that embedding ethical decisions into systems ensures accountability doesn’t fall solely on individuals when something goes wrong. “Decisions are embedded in your models, and you must be able to clearly explain how they were made. This forces companies to think critically, but it’s necessary. In the very near future, AI will be everywhere and used for nearly everything. If you don’t prepare now, you’ll be confronted with the consequences and lose control of the risks.”
This article was created in collaboration with De Tijd and L'Echo.