With the increasing availability of data and computational power, coupled with growing interest in Artificial Intelligence (AI) and Machine Learning (ML), AI/ML adoption in financial services is expected to accelerate and play a more prominent role. While regulators around the world are actively pursuing safe, sound, and responsible paths for this adoption, it is critical to strike a balance that fosters innovation without compromising model risk management (MRM).
One of the first questions financial institutions should ask themselves when building and implementing AI/ML technology is whether it meets the organization's definition of a model. More often than not, the answer is yes, given that the technology exhibits the conventional components of a model: input, calculation, and output.
While there is wide debate about whether AI/ML technology should be managed through the existing MRM framework, current prudential MRM guidelines (e.g., FRB SR 11-7/OCC 2011-12, FHFA AB 2013-07) offer a strong foundation for managing the key risks associated with AI/ML models and should therefore serve as the starting point for governing them.
Our new article offers perspective on how the current MRM framework, as detailed in SR 11-7, can be further strengthened to effectively address key challenges related to AI/ML models.