What are some potential use cases for Gen AI?
The Day 3 panelists highlighted that numerous use cases exist throughout an organization, particularly relating to data ingestion, relationship identification and prediction. The benefits of these tools include not only completing work more efficiently and reducing repetitive tasks, but also freeing up the workforce to reach the critical-thinking stage faster. These tools can also unlock value by identifying new ways to solve problems.
AI governance
While use cases for Gen AI abound, the Day 3 panelists explained the need for strong data governance policies, programs and training, and for keeping a human in the loop. They also highlighted ethical considerations around the sources of data on which Gen AI models are trained and the importance of understanding whether that data is relevant to the questions the model is being asked.
Panelists on Monday discussed the importance of measuring the accuracy of Gen AI outputs and their auditability. Raghvender Arni, Amazon Web Services Director of Cloud and AI Innovation, pointed to faithfulness and relevance metrics. Steve Lynch, AlphaSense EVP of Product Development, and Kristina Karnovsky, FactSet EVP and Head of Dealmaker and Wealth, echoed this sentiment, noting that companies should be able to trace outputs back to source information. To bring this to life, Karnovsky discussed the use of a ‘prove it panel’ and human-in-the-loop activities.
Has your company identified a framework for AI governance and responsible use? Companies can use AI governance frameworks to establish policies and procedures for monitoring AI-executed tasks and assessing the reliability of AI-produced information. At KPMG, we have established our Trusted AI Framework, which preparers can leverage to assist in designing, building, deploying and monitoring AI use.
What did the regulators say?
On Tuesday, Cicely Lamothe, SEC Deputy Director of Disclosure Operations, Division of Corporation Finance, noted that the SEC has seen a significant increase in disclosures related to AI, with references to AI in Form 10-K filings almost doubling from 2023 to 2024. She discussed how disclosures highlight a company’s use of AI and the impacts of that usage on business operations and future opportunities; however, she emphasized the importance of addressing risks associated with AI in the required risk disclosures. Because risks can be diverse, Lamothe urged companies to disclose specific and tailored information, and to avoid boilerplate language. To this end, she suggested that AI disclosures address topics such as operational and market dynamics, cybersecurity and data privacy, discrimination and bias, litigation, and the cost and burdens of regulatory compliance.
See remarks from the PCAOB Board and Staff on Tuesday regarding their considerations around AI here.