
Acceleration of AI ups the ante on governance

Rapid advancements in artificial intelligence (AI) and the implications for governance and board oversight are front and center.

By Patrick A. Lee and Jonathan Dambrot

The era of AI has begun with startling speed. AI and machine learning are increasingly driving business decisions and activities, and pressure continues to mount—from customers, regulators, and other stakeholders—for greater transparency into how these data-driven technologies and algorithms are being used, monitored, and managed. In particular, stakeholders want to understand how companies are addressing the risks associated with AI systems—risks such as algorithmic bias in healthcare scoring and access to healthcare services, in job application vetting and hiring practices, and in loan credit decisions; privacy violations; cybersecurity; disinformation and deepfakes; worker monitoring; and, more recently, the risks posed by generative AI.

Despite the explosive growth in the use of AI systems and increasing concerns about the risks these systems pose, many organizations have yet to implement robust AI governance processes. In a recent global survey of more than 1,000 executives by BCG and MIT Sloan Management Review, an overwhelming majority—84 percent—said that responsible AI should be a top management priority. Yet, just 16 percent of their companies have mature programs for achieving that goal.[1] Notably, a recent KPMG survey found that relatively few C-suite executives are directly involved in, or responsible for, strategies to manage AI risk and data/model governance, including establishing new processes or procedures (44 percent), reviewing AI risks (23 percent), and developing and/or implementing governance to mitigate AI risk (33 percent).[2]

Given the legal and reputational risks posed by AI, many companies may need to take a more rigorous approach to AI governance, including (i) monitoring and complying with the patchwork of rapidly evolving AI legislation, (ii) implementing emerging AI risk management frameworks, (iii) securing AI pipelines against adversarial threats, and (iv) assessing their AI governance structure and practices to embed the guardrails, culture, and compliance practices that will help drive trust and transparency in tandem with the transformational benefits of AI. The goal is often referred to as “ethical” or “responsible” AI—that is, making AI systems transparent, fair, secure, and inclusive. Below, we offer comments on these four areas of board focus.

Monitoring and complying with evolving AI legislation.

In addition to general data privacy laws and regulations, we are now seeing the emergence of AI-specific laws, regulations, and frameworks globally. For example, the EU’s Artificial Intelligence Act appears to be on the path to becoming law, perhaps by the end of 2023. The act may set a precedent for future risk-based regulatory approaches, as it would rank AI systems according to their risk levels, and ban or regulate AI systems based on those risk levels. There is no similar legislative framework in the U.S.; however, in October 2022, the White House released the Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, which could be the basis for future AI legislation. While nonbinding, the Blueprint identifies five principles “to guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.” A number of other countries are developing similar nonbinding principles or frameworks. Finally, various federal regulators have proposed AI-specific regulations, and there is a growing patchwork of AI-specific state and local laws. Monitoring and complying with evolving AI legislation and regulation will be a key priority for companies over the next year.

Eight core principles to guide responsible AI

1. Fairness: Ensure models are equitable and free from bias.
2. Explainability: Ensure AI can be understood, documented, and open for review.
3. Accountability: Ensure mechanisms are in place to drive responsibility across the lifecycle.
4. Security: Safeguard against unauthorized access, corruption, or attacks.
5. Privacy: Ensure compliance with data privacy regulations and appropriate use of consumer data.
6. Safety: Ensure AI does not negatively impact humans, property, or the environment.
7. Data integrity: Ensure data quality, governance, and enrichment steps embed trust.
8. Reliability: Ensure AI systems perform at the desired level of precision and consistency.

Source: KPMG, The flip side of generative AI: Challenges and risks around responsible use, 2023.
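The fairness principle above can be made concrete with a simple, measurable check. The sketch below computes a demographic parity gap: the difference in positive-decision rates between demographic groups. All of the data, the group names, and the 0.1 tolerance are hypothetical, invented here for illustration; they are not drawn from the article or any KPMG methodology.

```python
# Illustrative fairness check: demographic parity gap across groups.
# Data, group names, and the 0.1 tolerance are hypothetical.

def selection_rate(outcomes):
    """Fraction of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(decisions_by_group):
    """Largest gap in selection rates across groups.

    decisions_by_group maps a group name to its list of 0/1 model decisions.
    """
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical loan-approval decisions for two applicant groups
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% approved
}

gap = demographic_parity_gap(decisions)
print(f"Demographic parity gap: {gap:.3f}")  # prints 0.375
if gap > 0.1:  # example tolerance a governance policy might set
    print("Flag for review: selection rates differ materially across groups")
```

In practice, a governance program would track metrics like this per model and per protected attribute, with the acceptable tolerance set by policy rather than hard-coded.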

Implementing emerging AI risk management frameworks.

AI risk management has been a particular challenge for many companies, and the potential use of generative AI has now created a sense of urgency. While there are various standards and best practices to help organizations manage the risks of traditional software or information-based systems, the risks posed by AI systems present new challenges. To help companies address these challenges, in January 2023 the National Institute of Standards and Technology (NIST) published its AI Risk Management Framework, which is intended for voluntary use to help organizations address risks in the design, development, deployment, use, and evaluation of AI systems and to increase the trustworthiness of those systems. Given the critical importance of AI risk management, boards should have their management teams assess whether the AI Framework can provide helpful guidance in building or enhancing the company’s AI risk management structure and processes.

Securing AI pipelines against adversarial threats.

Given the current AI arms race, companies need to have processes in place for securing and hardening AI pipelines against adversarial threats. In addition to ethical and bias considerations that may inadvertently come from developing AI systems, consider the threats and impacts from adversarial attacks, including data poisoning, model poisoning, back doors, insider threats, and other ways that attackers might damage the company’s decision-making systems. Indications are that adversaries are arming themselves with tools to attack AI systems and profit from a lack of humans in the loop. Frameworks like MITRE ATLAS identify threats and mitigations that can be leveraged to better prepare the organization for these attacks.
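Data poisoning, one of the attacks named above, can be illustrated on a toy model. In the sketch below, all data and the attack are made up: a minimal nearest-centroid classifier is trained once on clean data and once on data into which an attacker has injected mislabeled points, dragging one class centroid away from the cluster it should represent and degrading accuracy on clean inputs.

```python
# Toy data-poisoning demonstration: injected mislabeled training points
# corrupt a nearest-centroid classifier. All data here is synthetic.
import random

def centroid(points):
    """Coordinate-wise mean of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def train(data):
    """data: list of ((x, y), label). Returns one centroid per label."""
    by_label = {}
    for point, label in data:
        by_label.setdefault(label, []).append(point)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, point):
    """Assign the label of the nearest class centroid."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(model, key=lambda label: dist2(model[label], point))

def accuracy(model, data):
    return sum(predict(model, p) == y for p, y in data) / len(data)

random.seed(0)
# Two well-separated clusters: label 0 near (0, 0), label 1 near (5, 5)
clean = [((random.gauss(0, 1), random.gauss(0, 1)), 0) for _ in range(100)]
clean += [((random.gauss(5, 1), random.gauss(5, 1)), 1) for _ in range(100)]

# The attacker injects points far from the true class-0 cluster but
# labeled 0, pulling the learned class-0 centroid off its data.
poisoned = clean + [((12.0, 12.0), 0)] * 100

clean_acc = accuracy(train(clean), clean)
poisoned_acc = accuracy(train(poisoned), clean)
print(f"accuracy, clean training set:    {clean_acc:.2f}")
print(f"accuracy, poisoned training set: {poisoned_acc:.2f}")
```

Real attacks are subtler than this, but the governance point is the same: the integrity of training data is part of the attack surface, so pipelines need provenance checks and anomaly detection on incoming data, not just perimeter security.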

Assessing AI governance structure and processes.

Delivering on the promises of AI while managing the risks requires robust AI governance structures and processes, aligned with the company’s broader risk management, data governance, and cybersecurity governance processes. To that end, in addition to the topics discussed above, we recommend that boards discuss with management the following issues:

  • The need for (or adequacy of) a cross-functional management steering committee to establish policies and guidelines regarding the company’s development, use, and protection of AI systems and models. How and when is an AI system or model—including the use of third-party generative AI services—to be developed and deployed, and who makes that decision? Benchmark the role, composition, and policies of such a steering committee against industry best practices.
  • What AI systems and processes has the company deployed, and which are the most critical?
  • What regulatory compliance and reputational risks—including biases—are posed by the company’s use of AI? How is management mitigating these risks?
  • How is management coordinating its AI governance activities with its cybersecurity and broader data governance activities?
  • Does the organization have the necessary AI-related talent and resources?
  • Are the company’s AI systems transparent, fair, secure, and inclusive—i.e., ethical and responsible—and consistent with the company’s purpose, values, and ESG/sustainability commitments?
  • Are the broad, potentially game-changing implications of AI—for the company’s industry, business model, and long-term viability and competitiveness—being factored into strategy discussions?

Patrick A. Lee is a senior advisor with the KPMG Board Leadership Center. Jonathan Dambrot is CEO of Cranium and a former principal of KPMG LLP.

A version of this article appears in NACD Directorship magazine.


  1. Elizabeth M. Renieris, David Kiron, and Steven Mills, “To Be a Responsible AI Leader, Focus on Being Responsible,” MIT Sloan Management Review and Boston Consulting Group, September 2022.
  2. KPMG, Responsible AI and the Challenge of AI Risk, 2023 KPMG U.S. AI Risk Survey Report.
