
      Artificial intelligence (AI) is revolutionizing business processes, decision-making, and customer interaction. However, AI transformation comes with a number of risks, including data breaches, algorithmic bias, ethical dilemmas, and regulatory sanctions. AI compliance encompasses all measures that ensure the legally compliant, ethically acceptable, and responsible use of AI systems.

      The EU AI Act and other regulatory requirements, such as the General Data Protection Regulation (GDPR), the Digital Operational Resilience Act (DORA), and the BaFin guidelines, impose clear obligations on financial companies. These include risk classification, documentation, governance structures, and monitoring obligations for AI applications to ensure their verifiability, non-discrimination, and responsible use.

      Our comprehensive range of services

      AI Governance

      The responsible and legally compliant use of AI requires clear structures, processes, and responsibilities. With our holistic approach, we support companies in efficiently meeting the requirements of the EU AI Act and other regulatory requirements without limiting their ability to innovate.

      Our structured and practical approach enables companies to ensure compliance while taking advantage of new opportunities for growth and competitive advantages.

      These are our core building blocks: 

      Anchoring responsibility in an independent unit

      We help establish a central, independent governance function as a single point of accountability for all AI-related issues. This unit creates clarity and accountability, ensures transparency, consistent processes, and compliance with regulatory requirements – and helps build trust in AI initiatives.

      Monitoring and committee structures

      To ensure continuous control, we provide support in setting up monitoring bodies and steering committees. These structures enable regular risk assessment, the approval of new use cases, and coordination with data protection, IT security, and specialist departments.

      Design of guidelines and standards
      • Definition of AI usage guidelines for all areas of the company 
      • Regulations for risk assessment and model validation 
      • Processes for audit and compliance checks


      AI Compliance Lifecycle

      A structured lifecycle for AI compliance is the basis for the responsible and legally compliant use of AI systems.

      Without clear processes for identifying, registering, and evaluating AI applications, there is a risk of "hidden" AI use, regulatory violations, and a lack of transparency.

      The EU AI Act requires systematic control of all AI systems in use, especially for high-risk applications.

      Our approach comprises four key steps: 

      Detection and Identification

      We provide support for the automated detection of AI components in all systems – for example, through IT monitoring and targeted analyses. In addition, a mandatory reporting process for new AI applications ensures that hidden use is avoided and companies retain a complete overview.
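      A mandatory reporting process can be complemented by automated scans of project inventories. The following is a minimal sketch of such a detection step, assuming dependency manifests are available as package lists; the watchlist of library names is an illustrative assumption, not an exhaustive catalogue.

```python
# Sketch: flag projects that may contain AI components by scanning their
# dependency manifests for well-known machine-learning libraries.

# Hypothetical watchlist of packages that indicate AI/ML usage (illustrative).
AI_INDICATORS = {"torch", "tensorflow", "scikit-learn", "transformers", "openai"}

def detect_ai_components(dependencies: list[str]) -> set[str]:
    """Return the subset of dependencies that suggest AI functionality."""
    return {dep for dep in dependencies if dep.lower() in AI_INDICATORS}

# Example: a project manifest listing its packages.
manifest = ["requests", "torch", "pandas", "openai"]
print(detect_ai_components(manifest))  # {'torch', 'openai'} (in some order)
```

      In practice such scans would feed flagged systems into the mandatory reporting process rather than replace it, since AI use can also hide in SaaS tools that never appear in a code manifest.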

      Classification

      We support you in the risk classification of your AI systems, from prohibited through high-risk to minimal-risk applications. With the help of self-assessments and decision matrices, the classification can be documented in a comprehensible manner, even when system changes require reclassification.
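      A decision matrix of this kind can be sketched as a small rule set mapping self-assessment answers to risk tiers. The attributes and rules below are illustrative assumptions only; a real classification follows the EU AI Act's prohibited-practice and Annex III categories and requires legal review.

```python
# Sketch: a simplified decision matrix for EU AI Act risk tiers.
# The question keys and the mapping are illustrative, not legal advice.

from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

def classify(use_case: dict) -> RiskTier:
    """Map self-assessment answers to a risk tier (illustrative rules)."""
    if use_case.get("social_scoring"):         # a banned practice under the Act
        return RiskTier.PROHIBITED
    if use_case.get("credit_decisions"):       # Annex III-style high-risk use
        return RiskTier.HIGH
    if use_case.get("interacts_with_humans"):  # transparency obligations apply
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify({"credit_decisions": True}).value)  # high-risk
```

      Encoding the matrix as code also makes reclassification after system changes repeatable: rerunning the same questions against the updated use case yields a documented, comparable result.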

      Registration

      All identified AI systems are recorded in a central register, ideally automated and integrated into existing IT service management processes. This creates a robust database for governance, risk analysis, and audits.
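      A central register can be sketched as a keyed collection of structured entries. The field names below are assumptions for illustration; in practice the register would typically live inside an existing IT service management tool rather than in application code.

```python
# Sketch: a minimal central register for AI systems, keyed by system ID.
# Field names are illustrative assumptions.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIRegisterEntry:
    system_id: str
    name: str
    owner: str
    risk_tier: str            # e.g. "high-risk", from the classification step
    registered_on: date = field(default_factory=date.today)

register: dict[str, AIRegisterEntry] = {}

def register_system(entry: AIRegisterEntry) -> None:
    """Add or update a system in the central register, keyed by its ID."""
    register[entry.system_id] = entry

register_system(AIRegisterEntry("sys-001", "Fraud scoring model", "Risk dept", "high-risk"))
print(len(register))  # 1
```

      Keying entries by a stable system ID means re-registering an updated system overwrites its old entry, keeping the register a single source of truth for governance, risk analysis, and audits.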

      Compliance Check

      Finally, we align your AI systems with regulatory requirements, such as those set out in the EU AI Act, DORA, ISO/IEC 42001, or internal guidelines. We ensure a comprehensive and efficient audit through structured catalogs of measures and integration into your existing compliance management system.
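      A structured catalog of measures can be checked mechanically: each registered system is compared against the catalog, and any measure without evidence surfaces as a gap. The catalog entries below are illustrative placeholders, not actual articles of the EU AI Act or ISO/IEC 42001 controls.

```python
# Sketch: checking a system's evidence against a catalog of measures.
# The requirement keys and descriptions are illustrative placeholders.

CATALOG = {
    "documentation": "Technical documentation maintained",
    "human_oversight": "Human oversight procedure defined",
    "logging": "Automatic event logging enabled",
}

def compliance_gaps(evidence: dict[str, bool]) -> list[str]:
    """Return the catalog measures for which no evidence exists."""
    return [desc for key, desc in CATALOG.items() if not evidence.get(key)]

print(compliance_gaps({"documentation": True, "logging": True}))
# ['Human oversight procedure defined']
```

      The resulting gap list is what an integrated compliance management system would track to closure, producing the verifiable audit trail the regulations call for.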



      Fundamental rights impact assessment

      With the entry into force of the EU AI Act, the Fundamental Rights Impact Assessment (FRIA) will become mandatory for all companies that use high-risk AI systems, for example in lending, fraud detection, or risk assessment. The aim is to identify and minimize risks to fundamental rights such as data protection, equal treatment, and transparency at an early stage.

      In addition to the fundamental rights impact assessment for the use of high-risk AI systems, the GDPR requires a data protection impact assessment (DPIA) for data-processing applications. Both procedures serve to protect individuals – for example, from discrimination, data misuse, and non-transparent decisions. In practice, FRIA and DPIA can be combined efficiently.

      Relevance of systematic AI compliance

      Conducting the FRIA and DPIA in parallel enables a holistic assessment of the risks posed by AI systems, both in terms of fundamental rights and personal data. It provides clarity for supervisory authorities, minimizes liability risks, and strengthens the trust of customers and employees.

      Our support

      We offer an integrated approach that systematically covers both requirements:

      • Scoping and risk analysis: Identification of relevant AI systems and assessment of their impact on fundamental rights and data protection.
      • Methods and templates: Use of proven assessment grids, decision matrices, and documentation templates.
      • Interdisciplinary expertise: Collaboration between lawyers, data protection officers, and AI experts.
      • Auditability and evidence management: Creation of verifiable documents for internal governance and external audits.
      • Tool-supported implementation: Use of digital platforms for structured execution and tracking.

      Our unique selling point: KPMG Trusted AI Framework

      The KPMG Trusted AI Framework is our global approach to responsible, secure, and compliant AI use. It was developed to help companies use AI systems ethically, transparently, and compliantly – in line with international ISO standards, the NIST AI Risk Management Framework, and regulatory requirements such as the EU AI Act. The Trusted AI Framework has a modular structure and integrates several enablers:

      • Control mechanisms for risks with the AI Control Framework
      • Methods and templates for assessments with the Responsible AI Toolkit
      • Overview of regulatory requirements thanks to the Compliance Heatmap
      • Guidelines for organization and processes through the AI Governance Blueprint
      • Assessment grid for risks with the Trusted AI Risk & Control Matrix
