Artificial intelligence (AI) holds great promise, and there’s much excitement around exploring use cases, especially for generative AI. As with any attempt at innovation, some use cases succeed beyond expectations while others fall short. Surprisingly, when AI use cases fail, it’s often the governance, not the technology, that has failed. That’s why audit committees play a key role in helping AI use cases flourish. By providing oversight of AI deployment, committees can help build trust and control risk, leading to smoother, more widespread adoption and sustained viability.
Although AI has existed for many years, its use has exploded with the recent widespread adoption of generative AI, which creates original content such as text, images, audio and video in response to user prompts. Now, 59 per cent of Canadian organizations are allocating more than 10 per cent of their IT budgets to AI, and the adoption of generative AI is growing at a pace that will see half of Canadian workers using it by 2026, according to KPMG research.
Generative AI is being deployed across the front, middle and back offices, both as proprietary solutions and through SaaS providers that are increasingly embedding it in their products. Most deployments are either productivity tools that perform tasks such as transcribing or drafting emails, or knowledge-based tools that apply generative AI to an existing data set to answer user queries.1,2
59% of organizations are currently allocating more than 10 per cent of their IT budgets toward AI technology
AI has arrived in finance and audit
AI is being widely applied in financial reporting and audit, with 87 per cent of Canadian organizations piloting or using it in their financial reporting and 100 per cent planning to adopt its use for at least one of their financial reporting processes, according to KPMG’s AI in financial reporting and audit: Navigating the new era.3 It’s being used to improve data quality, analyze trends, forecast and make better data-enabled decisions.
AI algorithms are also being developed and used to audit financial statements; they’re especially helpful when reports from many prior years or across many subsidiaries must be analyzed. Although widespread corporate use of AI is relatively new, three-quarters of organizations already believe it’s at least moderately important that their external auditor uses it.4
Audit committees must ensure they understand how AI is being used in the finance and audit functions and satisfy themselves that management and auditors know how it’s being employed. Management should establish and monitor a value framework for the effectiveness of each AI deployment, including key performance indicators (KPIs). These should be reported to the committee regularly, and AI should be a standing item on board and committee meeting agendas.
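To make the value-framework idea concrete, the sketch below shows one simple way management might track target-versus-actual KPIs for an AI deployment ahead of a committee report. The metric names, targets and values are purely illustrative assumptions, not prescribed measures.

```python
# A minimal sketch of a KPI snapshot for an AI deployment.
# All metric names, targets and values are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class AiKpi:
    name: str      # what is measured
    target: float  # threshold agreed with the committee
    actual: float  # latest observed value

    @property
    def on_track(self) -> bool:
        return self.actual >= self.target


# Example quarterly snapshot to be reported to the audit committee.
kpis = [
    AiKpi("user adoption rate", target=0.60, actual=0.72),
    AiKpi("output accuracy (sampled human review)", target=0.95, actual=0.93),
    AiKpi("analyst hours saved per month", target=400, actual=510),
]

for kpi in kpis:
    status = "on track" if kpi.on_track else "needs attention"
    print(f"{kpi.name}: {kpi.actual} vs. target {kpi.target} -> {status}")
```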
Management needs to explain to the committee how it’s ensuring that inputs into AI algorithms are correct and reliable, and that this data, along with the outputs derived from it, is adequately protected. Committees will need to question management about the precautions being taken to ensure that when generative AI is used to access data, it provides only the necessary data and only to authorized users.
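One common form of that precaution is to filter records by user entitlements before anything reaches the generative model. The sketch below illustrates the pattern; the data model, roles and function names are hypothetical, not a specific product’s API.

```python
# A minimal sketch of permission-aware retrieval: documents are filtered to
# those the requesting user is entitled to see before any are passed to a
# generative model as context. The data model and names are hypothetical.
from dataclasses import dataclass


@dataclass
class Document:
    text: str
    allowed_roles: set[str]  # roles permitted to access this record


def retrieve_for_user(query: str, user_roles: set[str],
                      corpus: list[Document]) -> list[Document]:
    """Return only the documents the requesting user is authorized to see."""
    authorized = [doc for doc in corpus if doc.allowed_roles & user_roles]
    # A real system would also rank `authorized` by relevance to `query`
    # and pass only the top results to the model.
    return authorized


corpus = [
    Document("Q3 draft financial statements", {"finance", "audit"}),
    Document("Published press release", {"finance", "audit", "all-staff"}),
]

# A general employee's query reaches the model with only the public document.
context = retrieve_for_user("What were Q3 results?", {"all-staff"}, corpus)
print([doc.text for doc in context])  # ['Published press release']
```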
Committees should also understand how their auditors use automated tools in their audit and how they incorporate AI, including how the AI algorithm was trained and tested to reach its conclusions and how the auditor has ensured that confidential reporting issuer information has not been shared.5
87% of Canadian organizations are already piloting or using AI in their financial reporting
Trust is essential
Employee attitudes toward AI range from excited to hesitant, so trust is imperative for adoption. To be trustworthy, AI solutions must be transparent, explainable and free of bias against individuals or groups. The data they use needs to be complete, accurate, appropriate and gathered in compliance with all applicable laws and regulations. AI solutions must be designed and implemented safely, with data protection policies and procedures in place, and must pose no threat to people, businesses or property. Audit committees should question management about how they’re ensuring the trustworthiness of AI deployments. This oversight can, in turn, help build trust and foster greater comfort with using AI.
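As one illustration of what monitoring for bias can look like in practice, the sketch below compares favourable-outcome rates across groups and flags a large disparity for human review. The data and the 80-per-cent threshold (a widely cited rule of thumb) are illustrative assumptions, not the only way to test fairness.

```python
# A minimal sketch of a group-fairness check: compare favourable-outcome
# rates across groups and flag material disparities. Data is illustrative.
from collections import defaultdict


def outcome_rates(records: list[tuple[str, bool]]) -> dict[str, float]:
    """Favourable-outcome rate per group."""
    totals: dict[str, int] = defaultdict(int)
    favourable: dict[str, int] = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        favourable[group] += outcome
    return {g: favourable[g] / totals[g] for g in totals}


records = ([("group_a", True)] * 80 + [("group_a", False)] * 20
           + [("group_b", True)] * 55 + [("group_b", False)] * 45)

rates = outcome_rates(records)
ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparity ratio: {ratio:.2f}")
if ratio < 0.8:  # assumed threshold; a real program would set its own
    print("Potential bias detected: escalate for review before wider use.")
```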
Talent management practices can also be instrumental in fostering AI adoption. While acquiring new talent with AI expertise is necessary, and is where most organizations focus their AI-related talent efforts, it’s equally important to train existing staff on the AI applications they’re expected to use. Management, board members and the audit committee must also be sufficiently educated to understand the solutions employed and to know what to look for from a governance standpoint.
Rethinking third-party risk management
Many organizations are adopting a hybrid model for deploying AI, combining vendor platforms with internally built platforms developed around available open-source models. One of the main gaps we’ve found in reviews is that existing third-party risk management (TPRM) programs are insufficient for dealing with vendors’ AI use. It’s common practice to assess the risk of new vendors and then reassess them at regular intervals according to their level of risk. For example, management can supplement its established third-party testing and review processes by requesting a System and Organization Controls (SOC) report to understand whether the right controls are in place. SOC reports remain uncommon in the AI space given the current lack of established guidelines, but their use is expected to grow.
A low-risk vendor might be reassessed only every few years, but if it starts using AI in its SaaS product or changes how it’s using AI, the current TPRM program might have no trigger point to alert the client organization to the change and spur an examination. TPRM policies must be redesigned to include trigger points that prompt the organization to revisit a vendor’s risk assessment before the scheduled reassessment. Audit committees must ask management how they’re adapting their TPRM policies around vendors’ AI use.
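A minimal sketch of that trigger-point idea follows, assuming a simple risk-tier schedule and a set of AI-related vendor events that force an out-of-cycle review. The tiers, intervals and event names are illustrative assumptions, not a prescribed TPRM design.

```python
# A minimal sketch of TPRM trigger points: reviews normally follow a
# risk-tier schedule, but AI-related vendor events force an earlier
# reassessment. Tiers, intervals and event names are illustrative.
from datetime import date, timedelta

REVIEW_INTERVALS = {
    "low": timedelta(days=3 * 365),   # reassess roughly every three years
    "medium": timedelta(days=365),
    "high": timedelta(days=180),
}

# Events that should trigger an out-of-cycle reassessment.
AI_TRIGGER_EVENTS = {
    "vendor_adopted_ai",
    "vendor_changed_ai_use",
    "vendor_added_ai_subprocessor",
}


def needs_reassessment(risk_tier: str, last_review: date,
                       recent_events: set[str]) -> bool:
    """True if the vendor is due by schedule or an AI trigger has fired."""
    overdue = date.today() - last_review >= REVIEW_INTERVALS[risk_tier]
    triggered = bool(recent_events & AI_TRIGGER_EVENTS)
    return overdue or triggered


# A low-risk vendor reviewed in early 2024 would normally wait years for
# its next review, but embedding generative AI in its product forces one now.
print(needs_reassessment("low", date(2024, 1, 15), {"vendor_adopted_ai"}))
```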
Regulation is coming
Although AI has existed for many years, regulations have lagged behind its progress. There are no mandatory AI regulations in Canada, but the Canadian government has proposed the Artificial Intelligence and Data Act (AIDA).6 As part of Bill C-27, AIDA is currently under consideration in committee in the House of Commons, so its final regulations and date of implementation remain uncertain.7 In the meantime, the Canadian government has issued the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems.8
The Office of the Superintendent of Financial Institutions (OSFI) has also proposed regulations in the form of Draft Guideline E-23 – Model Risk Management.9 Consultations have closed on the proposed guidelines, but the timing of the final guidelines remains uncertain. Audit committees must ensure management monitors potential AI regulations and prepares for implementation.
In addition to the voluntary code, organizations looking for guidance on pending regulations can turn to the EU AI Act, which came into force in August 2024 and will be rolled out in stages over the following two years.10 It’s likely to lead the way for global AI regulation, much as the EU’s General Data Protection Regulation (GDPR) did for privacy and data protection. Audit committees should ensure that they and management are familiar with the principles underpinning the EU AI Act and that the organization’s use of AI adheres to them.
While audit committee oversight of AI is necessary and important, it’s also good practice to form an AI governance committee. This committee should include members familiar with technology, legal, privacy and operational issues, as well as someone familiar with the marketing function, since customer-facing generative AI carries potentially significant risk.
Some managers and employees may feel that questioning by the audit committee hinders creativity, innovation and progress. However, ensuring that AI is adopted in a trustworthy manner, that it adheres to upcoming regulations and that vendors are adequately monitored encourages adoption across the organization and prevents deployments from being stalled or abandoned. By contributing to this oversight, audit committees can help ensure the successful deployment of AI.
References

1. KPMG in Canada. “AI in financial reporting and audit.” Accessed October 31, 2024.
2. KPMG in Canada. “AI for every business.” Accessed October 31, 2024.
3. KPMG in Canada. “AI in financial reporting and audit.” Accessed October 31, 2024.
4. Ibid.
5. Canadian Public Accountability Board. “CPAB Audit Quality Insights Report: 2024 Interim Inspections Results.” Accessed October 31, 2024.
6. Government of Canada, Innovation, Science and Economic Development Canada. “The Artificial Intelligence and Data Act (AIDA) – Companion document.” Accessed October 31, 2024.
7. Parliament of Canada. “Digital Charter Implementation Act, 2022.” Accessed October 31, 2024.
8. Government of Canada, Innovation, Science and Economic Development Canada. “Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems.” Accessed October 31, 2024.
9. Office of the Superintendent of Financial Institutions. “Draft Guideline E-23 – Model Risk Management.” Accessed October 31, 2024.
10. European Commission. “European Artificial Intelligence Act comes into force.” Accessed October 31, 2024.