How Artificial Intelligence helps to reduce the risk of breaches in segregation of duties
With the introduction of generative Artificial Intelligence (AI) chatbots like ChatGPT, and recent warnings from experts like Elon Musk and Steve Wozniak, the question of how to “control” AI has gained attention and relevance. Drawing on a use case from our experience in the industry, this blog presents our methodology for the assurance of AI systems. Applying this methodology has helped safeguard the reliability of AI systems in the industry and improve their performance.
AI in Control methodology
AI in Control is a comprehensive methodology that focuses on developing, testing, and validating AI systems by integrating data, processes, controls, and people. This approach ensures that AI systems are not only effective in achieving their intended purposes but also reliable in delivering consistent and accurate results in a transparent and explainable manner. The methodology involves creating a robust governance structure around AI systems, establishing performance review processes, defining Key Performance Indicators (KPIs) in line with business requirements, and agreeing with the business on model interpretability requirements. By emphasizing collaboration among stakeholders and a shared understanding of AI risks and benefits, AI in Control helps businesses to build trustworthy, reliable, and understandable AI solutions.
Use case: 90% decrease in risk exposure and 20% increase in the AI’s accuracy
An industrial company, one of KPMG’s clients, has been investing in an AI application that supports the identification of potential segregation of duties (SoD) breaches in its primary financial processes, such as order to cash and purchase to pay. The practice of SoD involves configuring user authorizations in computer systems to restrict users from carrying out specific combinations of transactions. This mitigates the risks of fraud and error, for example by dividing responsibilities between invoice entry and payment.
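The rule behind SoD can be made concrete with a minimal sketch. The conflicting transaction pairs and user data below are illustrative assumptions, not the client’s actual ruleset:

```python
# Illustrative SoD conflict check: flag users authorized for both
# transactions in a conflicting pair. Pair names are hypothetical.
CONFLICTING_PAIRS = {
    ("enter_invoice", "approve_payment"),
    ("create_vendor", "approve_payment"),
    ("create_sales_order", "issue_credit_note"),
}

def find_sod_breaches(user_authorizations):
    """Return, per user, the conflicting transaction pairs they can perform."""
    breaches = {}
    for user, transactions in user_authorizations.items():
        granted = set(transactions)
        conflicts = [pair for pair in CONFLICTING_PAIRS if granted.issuperset(pair)]
        if conflicts:
            breaches[user] = conflicts
    return breaches

authorizations = {
    "alice": ["enter_invoice", "approve_payment"],  # conflicting pair -> breach
    "bob": ["enter_invoice"],                       # single right -> no breach
}
print(find_sod_breaches(authorizations))
```

In practice, a ruleset like this only flags *potential* breaches; as the next sections describe, each exception still needs review, which is where automation and AI come in.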
The identification and remediation of SoD breaches require regular reviews of SoD exceptions, which are time-consuming and error-prone when done manually in globally operating companies with thousands, or even tens of thousands, of employees. The goal of the company’s AI implementation project was to automate the SoD review while improving the precision, efficiency, and accuracy of the SoD approach.
The company asked KPMG to review its AI with an audit team consisting of data scientists and IT auditors specialized in “AI risk & controls”. The review, following the “AI in Control” methodology, covered all the components of an AI system, from data extraction to model training and performance review.
The AI audit team identified major opportunities for improvement in the pre-processing of the data, designing new features that captured more information on the SoD exceptions. The team also provided recommendations on model design, training, and the tuning of model hyper-parameters. With KPMG’s help, the client drastically boosted model performance: the result was a system that reduced the number of falsely identified cases and lowered the overall risk exposure by 90%.
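To give a flavor of what such feature engineering can look like, here is a minimal sketch that derives features from a raw SoD exception record. The field names and features are assumptions for illustration, not the client’s actual design:

```python
# Hypothetical feature engineering for an SoD exception record.
from datetime import date

def extract_features(exception):
    """Derive model features from a raw SoD exception record."""
    days_between = (exception["second_action_date"] - exception["first_action_date"]).days
    return {
        # A short gap between the conflicting actions may indicate deliberate misuse.
        "days_between_actions": days_between,
        # Higher-value documents carry more financial risk.
        "amount": exception["amount"],
        # Acting on the same document is riskier than merely holding both rights.
        "same_document": int(exception["first_doc_id"] == exception["second_doc_id"]),
    }

record = {
    "first_action_date": date(2023, 3, 1),
    "second_action_date": date(2023, 3, 2),
    "amount": 12_500.0,
    "first_doc_id": "INV-001",
    "second_doc_id": "INV-001",
}
print(extract_features(record))
```

Features like these give a classifier more signal per exception than raw authorization data alone, which is one way richer pre-processing can cut down false positives.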
In addition to examining the technical aspects of the AI, we tackled specific challenges to ensure that the system was reliable in consistently delivering accurate results. The challenges included creating a robust governance structure around the AI. We also developed a consistent internal control framework based on KPMG guidelines. This framework comprised several additional controls, such as data quality checks, and regular model performance evaluations. Our approach required close collaboration between our data scientists, the client's IT team, and their internal audit team to design and monitor the AI system effectively.
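A data quality check of the kind mentioned above can be as simple as gating model input on field completeness. The sketch below is illustrative; the field names and the threshold are assumptions, not the framework’s actual controls:

```python
# Hypothetical data-quality control: reject a batch of extracted
# authorization records if too many required fields are missing.
def data_quality_report(records, required_fields, max_missing_ratio=0.01):
    """Check completeness of input records before they reach the model."""
    issues = []
    for field in required_fields:
        missing = sum(1 for r in records if r.get(field) in (None, ""))
        ratio = missing / len(records) if records else 1.0
        if ratio > max_missing_ratio:
            issues.append(f"{field}: {ratio:.1%} missing exceeds threshold")
    return {"passed": not issues, "issues": issues}

records = [
    {"user": "alice", "transaction": "approve_payment"},
    {"user": "bob", "transaction": ""},  # missing transaction code
]
print(data_quality_report(records, ["user", "transaction"]))
```

Running such a check on every data extraction, and logging its outcome, turns an informal habit into an auditable control in the sense described above.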
This project demonstrated the importance of employing an end-to-end methodology when developing trusted AI. It is not just about the technology; it also encompasses data, processes, controls, and people. Our approach displayed the necessity of having the right skills, expertise, and experience to develop, test, and validate the system. It emphasized the value of a shared vision and understanding of the risks and benefits of AI, as well as the critical role of collaboration among stakeholders. Ultimately, this project highlighted the importance of fostering a culture of continuous improvement and learning to ensure AI systems remain effective, reliable, and trustworthy.
This success story also demonstrates the potential benefits of AI in Control for businesses of all sizes. With the right implementation and validation by highly skilled data scientists, AI can provide significant returns by improving efficiency, reliability, and compliance.