      The EU AI Act, the EU-wide legislation on artificial intelligence (AI), entered into force on 1 August 2024 and will be fully applicable by 2 August 2026. This regulation establishes a risk-based framework for AI systems across the EU. It is particularly significant for cloud service providers offering machine learning services in high-risk sectors such as healthcare. In this context, the Artificial Intelligence Cloud Service Compliance Criteria Catalogue (AIC4) from the German Federal Office for Information Security (BSI) provides a structured assurance framework with which providers can demonstrate the security, robustness and governance of their machine learning services in line with the regulatory requirements of the EU AI Act.

      What is the AIC4?

      The AIC4 consists of technical information security criteria developed by the BSI. It is designed to assess the security and robustness of AI cloud services (i.e. cloud systems based on machine learning).

      The AIC4 extends the C5 cloud security criteria with AI-specific requirements; an AIC4 audit can only be carried out if a valid C5 attestation is in place. It focuses on the secure use of machine learning in cloud services. Compliance with the AIC4 criteria is verified through standardised audits and documented in a report that helps cloud customers assess the security and reliability of AI systems.

      How does the AIC4 enhance compliance and security beyond the requirements of the EU AI Act?

      The synergy between the AIC4 and the EU AI Act offers a highly effective combination of regulatory compliance and practical security: the EU AI Act defines the legal requirements, while the AIC4 puts into practice many of the security-focused aspects relevant to high-risk AI systems, particularly in cloud service contexts.

      The EU AI Act is a regulation that establishes a common legal framework for AI systems across the EU. It is the first comprehensive AI law aimed at ensuring that the use of artificial intelligence is safe, trustworthy and in line with EU values and fundamental rights. The Act categorises AI systems according to their level of risk and sets out corresponding obligations.

      The law applies to all AI systems placed on the European market, regardless of their origin, and uses a risk-based approach to regulating artificial intelligence. Systems classified as posing ‘unacceptable risks’ are prohibited. High-risk systems are subject to strict requirements, systems with limited risk must meet transparency obligations, and systems with minimal risk are largely unregulated.

      For high-risk AI, providers must implement measures such as risk management, documentation, human oversight, data governance and conformity assessments.

      This distinction is also reflected in the areas covered by each framework. The EU AI Act operates at a regulatory and legal level, defining responsibilities, obligations and market access conditions for providers and users of AI systems within the EU. It applies across sectors and use cases. 

      In contrast, the AIC4 operates at the technical and security levels. It is specifically designed for cloud-based AI services and focuses on the secure implementation and operation of machine learning systems. It provides concrete, verifiable criteria for security throughout the entire AI lifecycle. 

      Through an AIC4 assessment, providers can present independent security evidence, which is valuable for building customer trust and for risk assessments and readiness for regulatory audits. As many AI systems covered by the Act are provided as cloud services, the AIC4 offers a standardised framework for assessing security, robustness and data management that complements the requirements of the Act.

      Essentially, the EU AI Act defines what needs to be achieved, whilst the AIC4 demonstrates how this can be implemented, thereby increasing confidence among customers and regulatory authorities alike.

      Not compliant yet? Here’s what to do next

      Cloud service providers operating in high-risk sectors such as healthcare, whose AI services do not yet hold a BSI C5 or AIC4 attestation, are currently under significant pressure to act. Delaying action could limit market opportunities and even result in exclusion from tenders or jeopardise existing contracts if customers demand proof of fully tested, secure and compliant AI systems.

      Our experts Andreas Steffens and Patrick Stadler can assist you in preparing for and obtaining an AIC4 attestation.
