GSE guidance on AI
The new rules of accountability
Our latest thought leadership examines new GSE guidance that makes AI accountability, validation, and liability a C-suite priority, outlines our phased compliance solutions, and urges organizations to act before the March 2026 deadline.
Artificial intelligence is transforming industries, but with advancement comes increased scrutiny. A recent bulletin from the Government-Sponsored Enterprises (GSEs) has fundamentally reshaped the landscape of AI use, elevating it from an operational advantage to a C-suite-level risk. With a firm March 2026 deadline, organizations are now expected to meet new, non-negotiable standards for accountability, validation, and liability, including an unprecedented indemnification clause.
This isn't merely a technical update; it's a strategic imperative with direct financial implications. The shift from implicit expectations to explicit liability calls for demonstrable proof of effective oversight, continuous validation, and a defined acceptance of financial responsibility.
The challenge: A notable impact across the mortgage ecosystem
The new GSE guidance creates a ripple effect, impacting each participant differently:
Direct compliance & financial risk
Inherited risk & due diligence
Supply chain risk & competitive opportunity
KPMG solution: A phased approach to effective AI governance
At KPMG, we understand the complexities of this evolving regulatory environment. We provide a structured, phased path not only to achieve compliance but also to build an effective, trustworthy AI program that creates a competitive advantage. Our services are designed to meet you where you are, guiding you through each step:
1. Assess: Your readiness roadmap
- AI governance readiness assessment: A targeted engagement to benchmark your current state against GSE guidance and deliver a prioritized action plan.
- Third-party AI due diligence framework: For capital markets firms, we help design and build frameworks to evaluate the AI risk of your partners and investments.
- Validation & testing services: Independent, technical testing of your AI models for bias, security, fairness, safety, and performance to generate the necessary evidence for continuous validation (see the illustrative sketch after this list).
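To make "continuous validation" concrete, the minimal sketch below shows one kind of check an independent testing program might run: a demographic parity gap on approval decisions. The field names, sample data, and threshold are illustrative assumptions, not values prescribed by the GSE guidance.

```python
# Minimal sketch of one continuous-validation check: a demographic parity gap
# on model approval decisions. Field names, sample data, and THRESHOLD are
# illustrative assumptions, not values prescribed by the GSE guidance.
from collections import defaultdict

THRESHOLD = 0.2  # illustrative tolerance for the approval-rate gap

def demographic_parity_gap(records):
    """records: iterable of dicts with 'group' and 'approved' (bool) keys."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for r in records:
        counts[r["group"]][0] += int(r["approved"])
        counts[r["group"]][1] += 1
    rates = {g: approved / total for g, (approved, total) in counts.items()}
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    sample = [  # hypothetical decisions; real runs would pull from model logs
        {"group": "A", "approved": True},
        {"group": "A", "approved": False},
        {"group": "B", "approved": True},
        {"group": "B", "approved": True},
    ]
    gap, rates = demographic_parity_gap(sample)
    status = "FAIL" if gap > THRESHOLD else "PASS"
    print(f"approval rates: {rates}")
    print(f"parity gap: {gap:.2f} -> {status} at threshold {THRESHOLD}")
```

In practice, checks like this would run on a schedule against production decision logs, with results retained as evidence of ongoing oversight.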
2. Remediate & build: Closing the gaps
- AI governance implementation: Hands-on support to develop defined policies, implement technical controls, and design effective, repeatable testing procedures.
- Centralized AI inventory: Build and maintain a thorough, enterprise-wide inventory of AI systems, creating a single source of truth for risk management, compliance, and strategic oversight (an illustrative inventory record is sketched after this list).
- AI vulnerability management: Integrate an adaptive, AI-specific framework into your existing vulnerability management program to evaluate, prioritize, and remediate AI vulnerabilities.
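As an illustration of what a centralized inventory entry might capture, the sketch below defines a minimal record and a simple oversight query. The schema, risk tiers, hypothetical vendor name, and 180-day staleness window are assumptions for demonstration only, not a GSE-prescribed format.

```python
# Illustrative sketch of a centralized AI inventory entry and a simple
# reporting query. The schema, risk tiers, and 180-day staleness window are
# assumptions for demonstration, not a GSE-prescribed format.
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class AISystemRecord:
    system_id: str
    name: str
    owner: str                       # accountable business owner
    use_case: str                    # e.g. underwriting support, fraud detection
    third_party_vendor: str | None   # None for in-house models
    risk_tier: str                   # illustrative tiers: "high" | "medium" | "low"
    last_validated: date
    open_findings: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        system_id="AI-0001",
        name="Income verification model",
        owner="Head of Credit Risk",
        use_case="underwriting support",
        third_party_vendor="ExampleVendorCo",  # hypothetical vendor name
        risk_tier="high",
        last_validated=date(2025, 11, 1),
    ),
]

# A single source of truth enables simple oversight queries, e.g. high-risk
# systems whose last independent validation is older than 180 days.
stale = [
    r for r in inventory
    if r.risk_tier == "high" and (date.today() - r.last_validated).days > 180
]
for record in stale:
    print(asdict(record))
```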
3. Certify & operate: Building long-term trust
- AI governance managed services: For organizations focused on core business, we can operate components of your ongoing AI monitoring, testing, and reporting as a managed service.
Take action now
The March 2026 deadline is closer than it appears. Don't wait to address these new requirements.
Contact us for a complimentary GSE AI Guidance readiness briefing to discuss how these new rules will impact your specific business and how KPMG can help you achieve compliance and build a trusted AI program.