The European Central Bank (ECB) has published the latest edition of its ECB Guide on Internal Models. The updated guide sets out how machine learning methods can be applied within internal models, provided they meet the regulatory criteria. This marks a significant step forward: AI is no longer a theoretical possibility, but an acknowledged tool for building compliant, effective, and scalable models. The implications are profound: machine learning can enhance predictive power and capture behavioral patterns, but it also introduces new challenges in governance, explainability, and data infrastructure.
Governance and Oversight: Building Trust Beyond the Black Box
For years, a key concern has been the ‘black box’ nature of machine learning models. Regulators, boards, and executives need confidence that these models are not only powerful, but also transparent, accountable, and responsibly managed.
This makes governance essential. Institutions must establish clear policies for AI-driven model development and ensure these policies are embedded in all three lines of defense. Oversight frameworks should allow senior management to understand, question, and ultimately take ownership of AI models, while experts remain responsible for mitigating technical risks.
At KPMG, we see governance not as an afterthought, but as the cornerstone of responsible AI adoption in risk management.
Complexity and Explainability: Balancing Performance with Clarity
Machine learning often delivers remarkable performance improvements, but performance without transparency cannot satisfy regulatory expectations. Lenders must be able to explain why a loan was granted or denied, and risk managers must understand why a Loss Given Default (LGD) parameter shifted under a stressed scenario.
Recent advances in explainability frameworks enable institutions to open the black box, tracing model reasoning at client or portfolio level. Yet, explainability is not a plug-and-play feature; it requires deliberate design. Firms must strike a careful balance between complexity and clarity, choosing AI models that are both effective and explainable within real business processes.
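As an illustration, the sketch below shows how per-client and portfolio-level attributions might be produced for a tree-based credit model using the open-source shap library. The data, features, and model are invented for the example and do not represent a prescribed approach.

```python
# Minimal illustration of per-client explainability for a tree-based
# credit model using SHAP values. Data, features, and thresholds are
# purely illustrative, not a prescribed ECB approach.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
X = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, 1_000),
    "debt_ratio": rng.uniform(0, 1, 1_000),
    "months_on_book": rng.integers(1, 240, 1_000),
})
y = (X["debt_ratio"] + rng.normal(0, 0.2, 1_000) > 0.7).astype(int)

model = GradientBoostingClassifier(random_state=42).fit(X, y)

# Client-level attribution: which features drove this client's score?
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
client_idx = 0
for feature, contribution in zip(X.columns, shap_values[client_idx]):
    print(f"{feature:>15}: {contribution:+.3f}")

# Portfolio-level view: mean absolute contribution per feature.
print(np.abs(shap_values).mean(axis=0))
```

The same attribution output can feed both a client-facing explanation and a portfolio-level summary for model risk committees, which is where the balance between complexity and clarity is typically decided.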
In our experience, the ECB’s concerns on this topic are justified. Institutions that implement robust explainability frameworks are not only more compliant, but also gain stronger trust from stakeholders and clients alike.
Data Governance and Infrastructure: The Fuel of the AI Era
AI cannot thrive without high-quality data. Many modelling projects in finance fail not because of poor models, but because of insufficient data quality, governance, or infrastructure. In this age of AI, the challenge goes beyond structured financial datasets. Institutions are now expected to process an expanding variety of information such as customer emails, call center transcripts, chatbot logs, and even branch video recordings.
Those who treat data as the ‘new oil’ and invest in robust governance, automated pipelines, continuous monitoring, and quality controls will create a strong foundation for AI. Aligning these practices with the Basel Committee on Banking Supervision’s BCBS 239 principles and the Digital Operational Resilience Act (DORA) ensures both compliance and resilience.
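As a simplified illustration of automated quality controls in a data pipeline, the sketch below checks completeness, validity, and freshness before data reaches a model. The field names and thresholds are illustrative assumptions, not regulatory requirements.

```python
# A simplified sketch of automated data-quality controls:
# completeness, validity, and freshness checks before data reaches a model.
# Thresholds and field names are illustrative assumptions.
import pandas as pd

def quality_report(df: pd.DataFrame, max_age_days: int = 7) -> dict:
    report = {
        # Completeness: share of missing values per column.
        "missing_share": df.isna().mean().to_dict(),
        # Validity: exposures should never be negative.
        "negative_exposures": int((df["exposure"] < 0).sum()),
        # Freshness: how stale is the newest record?
        "days_since_last_update": (pd.Timestamp.now() - df["updated_at"].max()).days,
    }
    report["passed"] = (
        max(report["missing_share"].values()) < 0.05
        and report["negative_exposures"] == 0
        and report["days_since_last_update"] <= max_age_days
    )
    return report

df = pd.DataFrame({
    "exposure": [120_000.0, 85_500.0, 230_000.0],
    "updated_at": pd.to_datetime(["2024-05-01", "2024-05-02", "2024-05-03"]),
})
print(quality_report(df))
```

In practice, a check like this would run automatically on every data delivery, with failed reports escalated through the same governance channels that BCBS 239 already requires for risk data aggregation.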
KPMG supports clients in building these end-to-end data ecosystems, capable not only of feeding today’s AI models, but also of scaling to future demands.
Model Development and Lifecycle: From Innovation to Business Value
Developing AI models is as much an art as a science. Choosing the right algorithm for the data, ensuring reproducibility, and linking model development to a clear business rationale are essential. Equally important is lifecycle management, ensuring that models evolve with the business and remain robust under changing market conditions.
Financial institutions must carefully balance innovation, control, and resilience. The right lifecycle framework ensures that models are not one-off experiments, but trusted engines of decision-making that scale across the institution.
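One small building block of such a lifecycle framework is reproducibility metadata: recording exactly which data, parameters, and library versions produced a given model version, so that any decision can later be traced back to its source. The sketch below shows one possible shape for such a training manifest; the field names and storage format are assumptions for illustration.

```python
# A minimal sketch of reproducibility metadata captured at training time,
# so a model version can later be traced back to its exact data and code.
# Field names and the storage format are illustrative assumptions.
import hashlib
import json
import platform
from datetime import datetime, timezone

import sklearn

def training_manifest(train_csv_bytes: bytes, model_params: dict, seed: int) -> dict:
    return {
        "trained_at": datetime.now(timezone.utc).isoformat(),
        "training_data_sha256": hashlib.sha256(train_csv_bytes).hexdigest(),
        "model_params": model_params,
        "random_seed": seed,
        "python_version": platform.python_version(),
        "sklearn_version": sklearn.__version__,
    }

manifest = training_manifest(
    train_csv_bytes=b"client_id,income,default\n1,52000,0\n",
    model_params={"n_estimators": 200, "max_depth": 3},
    seed=42,
)
print(json.dumps(manifest, indent=2))
```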
Validation and Audit Readiness: Ensuring Continuous Trust
Every model developer views their model as the product of hard work and expertise. Yet, no model is perfect. Rigorous validation and independent testing are crucial, using not only statistical measures but also explainability frameworks that probe robustness and bias.
Audit readiness is becoming a non-negotiable expectation. Institutions must be able to demonstrate, at any time, that their AI models are transparent, tested, and compliant. This requires comprehensive validation frameworks, ongoing monitoring, and the involvement of both risk professionals and model developers.
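As an example of what recurring quantitative validation can include, the sketch below computes discrimination (AUC) on simulated hold-out data and a population stability index (PSI) between development and recent scores. The data is simulated, and the PSI thresholds quoted in the final comment are common industry rules of thumb rather than ECB requirements.

```python
# A simplified validation sketch: discrimination (AUC) on a hold-out set
# and a population stability index (PSI) between development and recent
# scores. All data is simulated; thresholds are illustrative conventions.
import numpy as np
from sklearn.metrics import roc_auc_score

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # Clip so out-of-range recent scores still land in the edge bins.
    actual = np.clip(actual, edges[0], edges[-1])
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(actual, edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(7)
dev_scores = rng.beta(2, 5, 5_000)       # scores at model development
recent_scores = rng.beta(2.2, 5, 5_000)  # scores on the recent portfolio
y_true = (rng.uniform(size=5_000) < dev_scores).astype(int)

print("AUC on hold-out:", round(roc_auc_score(y_true, dev_scores), 3))
print("PSI dev vs. recent:", round(psi(dev_scores, recent_scores), 3))
# Rule of thumb: PSI < 0.1 stable, 0.1-0.25 monitor, > 0.25 investigate.
```

Outputs like these are most valuable when produced on a fixed schedule and archived, so that an auditor or supervisor can see the full monitoring history rather than a single point-in-time result.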
How KPMG can support
The ECB’s recognition of machine learning methods represents a turning point. It signals that AI is not a future aspiration, but a present opportunity with immense potential for institutions prepared to embrace it responsibly. At KPMG, we support these institutions in meeting the ECB’s evolving expectations for the use of AI in internal models. Our services span the full AI model lifecycle, covering governance, development, validation, audit, and strategic transformation.
Please contact KPMG to explore how we can help you unlock the power of AI in risk management responsibly, transparently, and in line with the ECB’s latest expectations.