KPMG Trusted AI and the Regulatory Landscape

This page is frequently updated with KPMG's latest perspectives on the evolving AI regulatory environment, including:

  • Information and quotes from KPMG U.S. leaders on the EU A.I. Act
  • Information and quotes from KPMG U.S. leaders on the White House (U.S.) Executive Order on A.I.
  • Additional resources for more perspectives from KPMG U.S. leaders on A.I. regulation
  • Information and quotes from KPMG leaders on the AI Policy Roadmap: Senate AI Working Group
  • Information and quotes from KPMG U.S. leaders on the Colorado Artificial Intelligence Act (CAIA)
  • Information and quotes from KPMG U.S. leaders on the NIST, NTIA Guidance on AI/GenAI

NIST, NTIA Guidance on AI/GenAI – August 2024

Marking 270 days since the President signed the Executive Order (EO) on the Safe, Secure and Trustworthy Development of AI, the White House announced new actions taken by federal agencies in response to the EO, including these releases from the Department of Commerce’s National Institute of Standards and Technology (NIST) and National Telecommunications and Information Administration (NTIA):

  1. Final Guidance: Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence (GenAI) Profile (NIST AI 600-1)
  2. Final Guidance: Secure Software Development Practices for GenAI and Dual-Use Foundation Models (NIST Special Publication (SP) 800-218A)
  3. Final Guidance: Global Collaboration on AI Standards (NIST AI 100-5)
  4. Draft Guidance: Managing Misuse Risk for Dual-Use Foundation Models (NIST AI 800-1)
  5. Report: Dual-Use Foundation Models with Widely Available Model Weights (NTIA Report)

Please click here for the key features of these five publications.

Reflecting on the NIST and NTIA issuances, KPMG leaders issued the following statements:

“Marking nine months since the President signed the AI Executive Order, agencies report in on progress, and the Commerce Department, through NIST and NTIA, continues to release frameworks and guidance on AI/GenAI-related risk management and development, global plans for alignment, and best practices for managing and mitigating risks in foundation models. The NIST issuances demonstrate the swiftness of AI policy and regulation for both AI developers and deployers.”

-- Amy Matsuo, Principal & Regulatory Insights Leader, KPMG U.S.

“It is encouraging to see the swiftness with which NIST continues to release AI policy and guidance. In this recent release, the companion resource that outlines GenAI risks and suggested actions will be particularly useful to organizations seeking to adopt and implement NIST AI RMF compliance programs.”

-- Bryan McGowan, Trusted AI Leader, KPMG U.S.

The Colorado Artificial Intelligence Act (CAIA) – June 2024

The State of Colorado enacted S.B. 24-205, commonly referred to as the Colorado Artificial Intelligence Act (CAIA), which applies to persons conducting business in Colorado as “developers” or “deployers” of “high-risk artificial intelligence systems” in such areas as employment, housing, financial services, insurance, and healthcare. Developers and deployers must meet certain obligations, including disclosures, risk management practices, and consumer protections. Compliance is required beginning February 1, 2026. Similar AI-related legislative and regulatory activity is expected across states.

For more information on CAIA, see AI Regulation: Colorado Artificial Intelligence Act (CAIA) (kpmg.com)

The CAIA sets a blueprint for which companies should proactively prepare, especially as we anticipate that additional state legislation will follow suit. The new law adopts a risk-based tiering approach and will have significant ramifications across sectors as well as for employers who use automated decision-making systems. Strong AI governance structures, robust risk management frameworks, an AI inventory, and compliance management techniques will be crucial for businesses to maintain transparency and accountability. It is also important that businesses have clear protocols in place to notify affected consumers of potential risks, key decisions, and incidents of algorithmic discrimination. If companies begin to prepare early and proactively based on these standards, they will be better equipped when additional regulations are passed.

Reflecting on CAIA, KPMG leaders issued the following statements:

“The Colorado Artificial Intelligence Act provides both developers and deployers of high-risk AI systems with only eighteen months to implement operational, risk management, and compliance changes. It is likely to see continued AI-related legislative and regulatory activity across states with the adoption of ‘like’ rulemaking.”

-- Amy Matsuo, Principal & Regulatory Insights Leader, KPMG U.S.

~~~~~

“The Colorado Artificial Intelligence Act regulates AI with a focus on discrimination. Businesses that develop AI systems are looking for a clear roadmap for trustworthy AI as more states follow suit. Companies can navigate the complexity of increased AI legislative and regulatory activity by proactively preparing their organizations leveraging leading blueprints for AI governance – the NIST AI Risk Management Framework in the U.S. and the EU AI Act, globally.”

-- Aisha Tahirkheli, Trusted AI Leader, KPMG U.S.

_____________________________

AI Policy Roadmap: Senate AI Working Group – May 2024

On May 15, 2024, the Bipartisan Senate AI Working Group released a report called "Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate." The policy roadmap is intended to “stimulate momentum for new and ongoing consideration of bipartisan AI legislation.” The AI Working Group states that the policy roadmap summarizes findings from the Insight Forums and lays out potential policy priorities for relevant committees working on AI legislation during the current legislative session and beyond.

Examples of recommendations contained in the AI Policy Roadmap are outlined here: AI Policy Roadmap: Senate AI Working Group (kpmg.com)

Reflecting on the AI Policy Roadmap, KPMG leaders issued the following statements:

“The AI policy roadmap, based on a host of forums and hearings, helps support evolving frameworks and actions taken by regulatory agencies to help ensure the safeguarding, privacy, fairness, and security of AI. However, it also shows the complexity in this area of profound change - from securing funding, to introducing and finalizing frameworks and rulemaking, to advancing additional areas of study and measurement.  Despite this policy roadmap, expect a continued ‘piecemeal’ approach to US regulations coupled with evolving regulatory expectations in risk governance and risk management.”

-- Amy Matsuo, Principal & National Leader Regulatory Insights, KPMG U.S.

~~~~~

“With the release of the AI Policy roadmap, companies need to navigate the intricate terrain of regulation, balancing the scales of innovation with the weight of safeguarding privacy, fairness, and security. Yet, amidst this complexity, companies should embrace the journey of developing a comprehensive Trusted AI governance framework in an environment of evolving regulations, developing a framework that can scale and evolve as disparate regulations continue to be released.”

-- Bryan McGowan, Trusted AI Leader, KPMG U.S.

_____________________________

The EU A.I. Act

March 2024 Update

On March 13, the European Parliament approved the EU Artificial Intelligence Act in a plenary vote, a crucial step toward passing comprehensive AI regulation.

The EU AI Act takes a risk-based approach to regulating AI, with a focus on high-risk applications, while also including transparency requirements for general-purpose AI models and AI-based tools like deepfakes and AI chatbots.

U.S. multi-national corporations with operations in the EU that meet the criteria of the regulation will be required to comply. For many organizations, this will likely require building compliant AI governance programs across their enterprise – and across geographic borders – due to shared infrastructure at global corporations. Non-compliance with the Act is expected to lead to significant fines, particularly for companies deploying prohibited AI technologies.

The EU AI Act will also set a global standard, and we expect that similar regulation focused on protecting fundamental rights, promoting transparency, and requiring impact assessments for high-risk applications may be considered by other jurisdictions. Some provisions of the legislation take effect in late 2024, with the majority not applying until at least 2025.

KPMG recommends that US businesses take action now: understand the regulation and how it applies to them, assess their risks and potential impacts, and implement the necessary governance and compliance measures within their operations to meet the emerging EU requirements and position themselves for future US regulatory requirements.

In response to the vote, KPMG leaders issued the following statements:

“The EU AI Act will have far-reaching implications not only for the European market, but also for the global AI landscape and for U.S. businesses. It will set a standard for trust, accountability and innovation in AI, and policymakers across the US are watching. U.S. companies must ensure they have the right guardrails in place to comply with the EU AI Act and forthcoming regulation, without hitting the brakes on the path to value with generative AI.”

-- Steve Chase, Vice Chair of AI & Digital Innovation, KPMG U.S.

~~~~~

"This vote on the EU AI Act marks a significant milestone in shaping the global landscape of Trusted AI regulation, which, once mature, will foster public confidence and ensure ethical practices in AI development. The role of rule-makers in driving trust and responsible innovation is essential and aligned with KPMG’s ongoing commitment to designing, building and deploying AI systems in a reliable, ethical and collaborative manner."

-- Tonya Robinson, Vice Chair and General Counsel - Legal, Regulatory and Compliance, KPMG U.S.

~~~~~

“The introduction of the EU A.I. Act serves as a clear signal that any US multinational operating in the European market should be proactively considering and preparing for these new regulations. The Act underscores the importance of fundamental rights impact assessments and transparency, which will undoubtedly impact how businesses operate and deploy AI technologies. By embracing these regulations and incorporating responsible AI practices, US multinationals can not only ensure compliance but also foster trust and maintain a competitive edge in the European market.

“Now is the opportune moment for U.S. businesses to adopt Trusted AI programs in response to ongoing global and U.S. regulations. These programs will enable companies to swiftly evaluate and account for the risks and vulnerabilities. It is imperative for organizations to progress from mere planning to the practical implementation of ethical principles, establishing responsible governance, policies, and controls that align with leading frameworks and emerging regulations. The integration of responsible and ethical AI should be ingrained throughout the entire AI lifecycle.”

-- Bryan McGowan, Trusted AI Leader, KPMG U.S.

~~~~~

“The EU AI Act is a landmark regulation, not just in the EU but globally – similar to the adoption of GDPR. Whether at the state or Federal level, US policymakers will surely consider a range of foundational principles contained within the EU AI Act – fairness, explainability, data integrity, security and resiliency, accountability, privacy, and risk management – under existing rules and authorities as well as in future legislation and regulation. The time for simply establishing sound risk governance and risk management AI programs is quickly passing – the time for implementing, operationalizing, demonstrating and sustaining effective risk practices is now.”

-- Amy Matsuo, Principal & National Leader Regulatory Insights, KPMG U.S.

_____________________________

The EU A.I. Act

December 2023 Update

In December 2023, the Council of the European Union and the European Parliament reached a historic agreement on the world's first comprehensive set of rules for artificial intelligence (AI). The Artificial Intelligence Act aims to establish a regulatory framework for the development and use of AI that prioritizes safety, transparency, and accountability. The act will apply to a wide range of AI applications, including those used in healthcare, transportation, and public safety.

The act sets out strict requirements for AI developers and users, including mandatory risk assessments and transparency obligations. It also establishes a European Artificial Intelligence Board to oversee the implementation of the act and provide guidance on AI-related issues. The agreement represents a significant step forward in the regulation of AI and the promotion of responsible AI development and use, and is expected to have a major impact on the global AI industry.

The EU AI Act is expected to have a significant impact on US-based companies, particularly in terms of compliance costs, strategic business shifts, and balancing transparency with intellectual property protection. The act mandates a risk-based approach to regulation, requiring businesses to navigate a range of requirements and obligations depending on an AI system's risk classification. Companies using AI systems in prohibited areas may need to reassess their product or business strategies and pivot, adjust, or discontinue products or services accordingly. Additionally, the act imposes a significant administrative burden on companies, requiring thorough documentation and oversight and control measures for high-risk AI systems.

To navigate these challenges, KPMG recommends that US businesses closely monitor regulatory developments, assess their risks and potential impacts, and implement necessary governance and compliance measures within their operations.

Reflecting on the provisional agreement, KPMG leaders issued the following statements:

“The provisional EU A.I. Act will establish comprehensive guardrails on the AI highway, and it will influence what’s to come. Organizations must implement Trusted AI and modern data strategies in order to go faster with confidence.

“The EU’s landmark Artificial Intelligence Act marks a pivotal moment for U.S. businesses navigating the evolving AI landscape. We anticipate it will be extremely influential on the AI regulatory environment, reaching far beyond the tech sector, akin to the impact of GDPR on data privacy.

“Companies should be monitoring what is happening in the EU closely to assess potential impacts and implement the necessary governance and compliance measures. Rapid assessment of emerging regulatory challenges will enable organizations to reduce disruption, minimize risks and ensure a smoother adoption of AI. While it will require up-front investment in Trusted AI programs and sophisticated data governance, this preparation will accelerate greater, sustained returns.

“KPMG’s global Trusted AI commitment guides our firm’s aspirations and investments in AI, and its ethical pillars align closely to emerging requirements in the EU A.I. Act. However, KPMG’s Trusted AI approach is not limited to addressing regulatory compliance. It was purpose-built to accelerate the value of AI for our clients and our firm while serving the public interest and honoring the public trust.”

-- Steve Chase, Vice Chair of AI & Digital Innovation, KPMG U.S.

~~~~~

“While the trust of our clients and our people is always the north star for our AI aspirations, the provisional EU A.I. Act underscores just how pivotal public confidence will be to any successful AI investment. KPMG’s Trusted AI ethical pillars prioritize Accountability, Safety, Data Privacy, Transparency and Fairness, all well aligned to the EU A.I. Act’s stated intentions."

-- Tonya Robinson, Vice Chair and General Counsel - Legal, Regulatory and Compliance, KPMG U.S.

~~~~~

“With continued global and US regulation on the horizon, now is the time for U.S. businesses to implement Trusted AI programs to quickly assess and understand risks and exposures. Organizations must move from planning to operationalizing ethical principles into practice and institute responsible governance, policies and controls aligned to leading frameworks and emerging regulations. Responsible and ethical AI must be embedded by design across the AI lifecycle.

“The provisional EU A.I. Act, as agreed on December 8, underscores the importance for companies to invest in fundamental rights impact assessments to better understand the potential impact of their actions, policies, and technology use on the basic rights of their customers, employees and broader society. Companies using AI systems in high-risk or prohibited areas may need to reassess product or business strategies.

“As with any new regulation, there will be complex issues that businesses must navigate. The EU A.I. Act is on track to set a high bar for transparency and disclosure. This transparency is critical to maintain trust in AI, but it will also require businesses to find the right balance between disclosure and protecting trade secrets.”

-- Bryan McGowan, US Trusted AI Leader, KPMG U.S.

_____________________________

The White House (U.S.) Executive Order on A.I.

On October 30, 2023, the White House issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, aimed at establishing a framework for the development and use of AI that prioritizes safety, security, and privacy. It also establishes a White House AI Council to coordinate the development and implementation of AI policy across federal agencies.

The executive order emphasizes the importance of transparency and accountability in AI development and deployment, particularly in areas such as healthcare, transportation, and national security. It also directs federal agencies to prioritize the development of AI technologies that promote economic growth and job creation while minimizing potential negative impacts on workers and communities. Overall, the order represents a significant step forward in the regulation of AI and the promotion of responsible AI development and use.

In response to the news, KPMG released a number of statements:

“The announcement of an Executive Order to promote safe, secure and trustworthy AI only furthers the need for organizations to balance innovation, efficiency, and value with appropriate considerations and safeguards to govern, secure and operate AI.”

-- Bryan McGowan, US Trusted AI Leader, KPMG U.S.

~~~~~

“Organizations will without a doubt be looking to this Executive Order for broader signals on where the U.S. regulatory landscape is headed. To adhere to current requirements and prepare for future regulations, leading companies are instituting Trusted AI programs that embed clear guardrails across the organization and continually adapt to address new, evolving risks. With the right governance, policies, and controls, organizations can strike the right balance between being bold, fast and responsible to accelerate the value of AI with confidence.”

-- Steve Chase, Vice Chair of AI & Digital Innovation, KPMG U.S.

~~~~~

“As AI – and generative AI – continue to gain momentum, the concerns around ethics, data, and privacy related to these technologies are clearly priorities of the Administration. While efforts to regulate AI both domestically and internationally progress, tech companies will need to be vigilant about continuously assessing their approach to R&D and deployment of their products and services that both meet regulatory expectations and maintain their competitive edge.”

-- Mark Gibson, Global and U.S. Technology, Media & Telecommunications Leader, KPMG

~~~~~

“As the Executive Order reaffirms, AI, including GenAI, cuts across principles of safety and security, privacy, civil rights, consumer and worker protections and innovation and competition. Assessing both rewards and risks will be critical to innovation while maintaining trust. Legislative and regulatory debates both domestically and internationally need to be monitored closely as these evolve. And companies must now set appropriate risk and compliance guardrails, recognize ‘speedbumps,’ and leverage industry-based and other best practices, following robust risk standards around such areas as testing, authentication and outcomes. Some regulators have already made it clear that existing authorities and regulations apply to ‘automated systems,’ including algorithms, AI, predictive analytics and innovative technologies – the time for sound risk governance and risk management is now.”

-- Amy Matsuo, Principal & National Leader Regulatory Insights, KPMG U.S.

Additional resources

KPMG Trusted AI
At KPMG, we are committed to upholding ethical standards for AI solutions that align with our Values and professional standards, and that foster the trust of our clients, people, communities, and regulators.

Landmark Actions Coming: The AI Act and Growing US Regulations
“Whole-of-government” actions increasing as agencies intensify their focus on safe, secure, and trustworthy AI/GenAI

Trusted AI services
Unlock the vast potential of artificial intelligence with a trusted approach.

Decoding the EU AI Act
A guide for business leaders
