
      Banks are increasingly turning to generative artificial intelligence (Gen AI) to accelerate processes, support decision-making and handle complex information. This gives rise to new risks that are pushing established validation processes to their limits. Our English-language white paper, ‘Validation of Generative AI Models in the Banking Sector’, shows how banks should further develop their model risk management and which additional testing and governance mechanisms are necessary to use Gen AI in a safer, more controlled and more regulatorily sound manner.

      Unlike traditional models, Gen AI does not merely deliver numeric predictions; it also generates text, recommendations and complex inferences. As a result, risk profiles are shifting. Banks should understand what new risks Gen AI creates, how existing risks are changing, and what control and monitoring mechanisms are now adequate.

      Why does the risk profile change with Gen AI models?

      Gen AI models lack transparency and can generate biased, incomplete or simply incorrect content. This creates operational, regulatory and reputational risks, among others. Unlike traditional models, Gen AI systems are less deterministic, making quality control more challenging. Validation must systematically account for the likelihood of hallucinations, bias and erroneous conclusions – regardless of how convincing the results may appear.
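      One practical consequence of this non-determinism: the same prompt can yield different answers across runs, so a validation step may sample a prompt repeatedly and measure agreement. The following is a minimal sketch of such a consistency check (the function name, sample data and 0.8 result are illustrative assumptions, not from the white paper):

```python
from collections import Counter

def consistency_score(outputs: list[str]) -> float:
    """Fraction of sampled outputs agreeing with the most common answer.

    A simple proxy for non-determinism: the same prompt is sampled
    several times; low agreement flags prompts whose outputs warrant
    closer manual review.
    """
    if not outputs:
        raise ValueError("need at least one sampled output")
    # Normalise trivially so that casing/whitespace do not count as disagreement.
    normalised = [o.strip().lower() for o in outputs]
    most_common_count = Counter(normalised).most_common(1)[0][1]
    return most_common_count / len(normalised)

# Hypothetical answers from five runs of the same credit-policy prompt:
samples = ["Yes", "yes", "Yes", "No", "yes"]
print(consistency_score(samples))  # 0.8
```

      In a real validation pipeline the normalisation would be semantic (e.g. an embedding or judge-model comparison) rather than string-based; the point is that quality control becomes statistical rather than a one-off test.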

      What regulatory requirements are relevant to Gen AI governance?

      Legislators and regulatory authorities are raising the bar in terms of transparency and oversight. Regulations such as the EU AI Act, the AI Risk Management Framework from the US National Institute of Standards and Technology (NIST) or the US Federal Reserve’s SR 11-7 supervisory guidance require traceable decisions, clear responsibilities and continuous monitoring – particularly in high-risk use cases such as lending, compliance analyses or risk assessments. For banks, this means that Gen AI should not only be powerful, but also documented, explainable and verifiable by third parties.


      How should Gen AI validation be further developed?


      Traditional validation approaches fall short when it comes to Gen AI. Modern approaches should be tailored to the architecture and use case. These include:

      • Testing for hallucinations and systematic biases
      • Checking input and output quality
      • Analysing security vulnerabilities and prompt stability
      • Evaluating agent-based systems and dynamic response chains
      • Evaluating retrieval-based architectures (Retrieval-Augmented Generation, or RAG for short)
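      To make the first and last items concrete, a hallucination test for a RAG system often checks whether an answer is grounded in the retrieved context. The sketch below uses crude word overlap to keep the example self-contained; the function, threshold and sample data are illustrative assumptions (production pipelines typically use entailment models or judge LLMs instead):

```python
def is_grounded(answer: str, context_chunks: list[str],
                threshold: float = 0.6) -> bool:
    """Crude grounding check for a RAG answer.

    Returns True if enough of the answer's vocabulary also appears in
    the retrieved context. Word overlap stands in for the semantic
    comparison a real validation pipeline would use.
    """
    answer_words = {w.lower().strip(".,") for w in answer.split()}
    context_words: set[str] = set()
    for chunk in context_chunks:
        context_words |= {w.lower().strip(".,") for w in chunk.split()}
    if not answer_words:
        return True  # an empty answer cannot contradict the context
    overlap = len(answer_words & context_words) / len(answer_words)
    return overlap >= threshold

# Hypothetical retrieved context and two candidate answers:
context = ["The borrower's 2023 revenue was EUR 12 million."]
print(is_grounded("The borrower's 2023 revenue was EUR 12 million.", context))  # True
print(is_grounded("Revenue grew by 40 percent in 2024.", context))              # False
```

      Running such checks continuously over sampled production outputs, rather than once at approval, is what the process-oriented validation approach described here amounts to in practice.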

      This shifts the focus of validation towards a more process-oriented and continuous approach that integrates technical, subject-matter and regulatory perspectives.


      How does KPMG support banks with Gen AI validation?

      The white paper demonstrates how Gen AI can be systematically integrated into our proven model validation framework. It describes the analysis of the model concept, the review of the technical implementation, the assessment of input and output quality, and the creation of robust evidence for regulatory audits. The publication offers banks a practical guide on how to operate Gen AI in a controlled, future-proof and compliant manner – from implementation and monitoring through to documentation.


      Whitepaper

      This enables banks to integrate Gen AI into their model landscapes in a secure, transparent and regulatory-compliant manner – download the white paper ‘Validation of Generative AI Models in the Banking Sector’ now to find out more.


      Your contact

      Matthias Peter

      Partner, Financial Services

      KPMG AG Wirtschaftsprüfungsgesellschaft