In 2025, anticipate repeal of the current AI Executive Order and establishment of a new AI Executive Order that prioritizes AI innovation and growth across all agencies. Expect continued application of existing regulations and frameworks to AI and systems, alongside a push toward “non-regulatory approaches” such as industry/sector-specific policy guidance, the use of voluntary frameworks and standards (such as the NIST AI Risk Management Framework), and test/pilot programs. The Administration and regulators will continue to focus on the interplay between trusted systems and potential cybersecurity, privacy, and national security risks; increase their focus on the nexus between AI policy and energy policy; and lessen their focus on potential “AI harms.” Expect ongoing expansion of state bills/laws and legal challenges to serve as precedent for new policies and/or rulemakings; the significant volume of AI-related state activity will likely pressure Congress and the Administration to establish a federal AI policy framework.
U.S. efforts to regulate AI and systems/technologies continue to evolve largely through guidance, laws/regulations, and enforcement to address potential consumer harm. The regulatory focus will continue to align on core principles, though these may be nuanced for specific agency and state focal areas.
These core principles include:
AI and systems are deployed in a manner that:
Developers, deployers, and acquirers are responsible for clearly demonstrating:
A risk management framework covering the full AI lifecycle (design, development, use, and deployment) and requiring:
Safeguards to reinforce the reliability of AI and systems against potential risks or disruptions through:
Collection and use of consumer data comply with applicable data privacy and protection laws and regulations, and incorporate features to limit:
Data is assessed/tested for accuracy/quality, completeness, consistency, appropriateness, and validity prior to use and on an ongoing basis as part of the design and application of technological tools, promoting trust in AI decisioning.
With the core principles as a base, federal agencies will continue to apply existing and new guidance, regulations, and frameworks toward managing the risks related to AI and systems. Multiple public-private initiatives are underway to inform (through information sharing, testing, transparency) the understanding of, and promote innovation in, AI model development and related regulatory guardrails. Related state activity is also gaining momentum at both the legislative and regulatory levels.
The focus on risk management will cover the full AI lifecycle and include:
Cross-agency evaluation of risk management practices under:
Cross-agency focus on robust and effective governance practices, including:
Driven by the proliferation of available consumer data, the volume of data needed to train AI models and systems, and the increasing number of applications of AI and systems, regulatory attention and enforcement will focus on:
Regulators will expect companies to demonstrate continual improvement of the risk governance/management/controls framework. Practices are expected to improve through public/private information sharing (within and across organizations, as well as across regulators), especially in areas such as risk management, decision-making processes, responsibilities, common pitfalls, and TEVV (testing, evaluation, validation, and verification).
The implementation of AI and systems is marked by complexity due to the speed of technological advancements, evolving standards, and the need for effective change management. Regulatory discord and legal challenges at the federal, state, and global levels may exacerbate these complexities.
The rapid pace of AI system development and deployment, both in-house and through third parties, requires agility in adapting to new applications of existing laws/regulations, evolving standards, and new requirements.
Legislators and regulators are looking to impose guardrails that broadly will protect consumers, financial stability, and national security from potential misuse of AI and systems. Through laws and regulations, they are looking to hold model developers, deployers, companies, boards, and management accountable for AI and system applications and outputs, placing importance on the ability to explain, and disclose as required, the:
Even when aligned on the core principles, diverging regulatory frameworks and expectations across federal, state, and/or global jurisdictions, or by industry or geography, could greatly expand the complexity of risk and compliance challenges and necessitate a reassessment of current and target-state compliance functions/approaches to compliance risk assessments. Divergences are likely to develop when:
KPMG Regulatory Insights is the thought leader hub for timely insight on risk and regulatory developments.