2023 saw major advances in the development and use of generative AI, with its ability to create new, original content such as text, images, and video. Indeed, generative AI has been the focus of discussion in many boardrooms as companies and boards seek to understand the opportunities and risks posed by the technology – a challenge given the pace of the technology’s evolution.

The potential benefits of generative AI vary by industry but might include automating business processes such as customer service, content creation, product design, and marketing planning, as well as improving healthcare and accelerating the development of new drugs. However, the risks posed by the technology are significant, including inaccurate results, data privacy and cybersecurity risks, and intellectual property risks (including unintended disclosure of the company’s sensitive or proprietary information and unintended access to third-party IP), as well as compliance risks posed by rapidly evolving legislation around the world.

Given the strategic importance of generative AI to most companies, boards should be monitoring management’s efforts to design and maintain a governance structure and policies for the development and use of generative AI.

Think about:

  • How and when is a generative AI system or model – including a third-party model – to be developed and deployed, and who makes that decision?
  • How are the company’s peers using the technology?
  • How is management mitigating the risks posed by generative AI and ensuring that the use of AI is aligned with the company’s values? What generative AI risk management framework is used? What is the company’s policy on employee use of generative AI?
  • How is management monitoring rapidly evolving generative AI legislation and ensuring compliance?
  • Does the organization have the necessary generative AI-related talent and resources, including in finance and internal audit?

Boards should also assess their governance structure for board and committee oversight of generative AI. In addition to the full board’s engagement in overseeing AI, do certain committees have (or should they have) specific oversight responsibilities, including perhaps taking deeper dives into certain aspects of generative AI?