A guide for CISOs and other business leaders
Approved by the European Parliament in March 2024, the European Union’s new Artificial Intelligence Act (EU AI Act) is poised to set a new standard for AI regulation and enforcement worldwide. The chief information security officer (CISO) is among the senior roles most affected, as the Act will shape how businesses develop and deploy AI technologies and secure their data.
As Generative AI (GenAI) fundamentally transforms value streams and business models, understanding the regulation is important for more than just compliance reasons; it is essential for businesses to continue innovating and remain competitive in their industries today. The EU AI Act, however, is a complex 108-page document with multiple subsections.
This article breaks down the key information, including what the Act covers and the important definitions, provisions, and protections it establishes. We also explore what your business can do now to proactively prepare, including how the CISO can help ensure responsible AI deployment and compliance.
The EU AI Act has wide-ranging impacts on any business that operates in the EU, as well as any business that offers AI products, services, or systems that can be used within the EU.
The Act provides a robust regulatory framework for AI applications to ensure user and provider compliance. It also defines AI and categorizes AI systems by risk level while outlining requirements for safe and responsible use.
Applicable parties for this regulation include:
Providers that place AI systems or general-purpose AI models on the EU market, regardless of where they are established.
Deployers (users) of AI systems located within the EU.
Providers and deployers located outside the EU where the output produced by the AI system is used in the EU.
Importers and distributors of AI systems, as well as product manufacturers placing AI-enabled products on the EU market.
Notably, the EU AI Act applies to many U.S. companies, potentially even including those with no physical EU presence. CISOs of U.S. companies that provide services of any kind to the EU need to evaluate how the Act applies and take steps to comply if it does.
Your company may be required to comply with the EU AI Act if any of the following apply:
Your company directly operates, sells, or offers services within the EU, including through e-commerce or digital platforms.
Your AI technologies are part of products or services integrated or sold by EU-based companies.
Your AI systems process data concerning EU residents.
In addition, the EU AI Act may ultimately influence the final language of numerous proposed AI regulations. Although it is perhaps the most well-known and far-reaching rule, the EU AI Act is part of a wider global trend of rising regulatory guidelines for AI. As it is put into practice, policymakers around the world are likely to look to the EU AI Act as an example and seek at least some level of alignment with its perspective on key topics such as safety, security, privacy, governance, compliance, fairness, transparency, and trustworthiness.
This includes several proposed AI regulations in the U.S., which does not currently have overarching federal rules or penalties related to AI usage, but where regulators are looking to merge sector-specific rules and frameworks and establish a more holistic approach.
The official definition within the regulation encompasses any machine-based system that is designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
The EU AI Act establishes product safety regulations in the same vein as the Food Code issued by the US Food and Drug Administration (FDA), which contains recommendations on the safe handling and storage of food for American consumers.
Similarly, the regulation aims to ensure that EU citizens are safe from intentional or unintentional harm caused by AI products and services.
The EU AI Act creates a framework for understanding the risk associated with AI and sets requirements for high-risk systems. All products and services will fall into one of four risk categories depending on the data they capture and the decisions or actions made with that data.
AI systems that are deemed to have unacceptable risk violate the fundamental rights of the consumer and are prohibited. Examples: social scoring; manipulation through subliminal techniques; exploitative practices; real-time biometric identification systems.
High-risk AI systems create an elevated risk to the health and safety or rights of the consumer. Such systems are permitted on the EU market subject to compliance with certain mandatory requirements and a conformity assessment. Examples: biometric identification and classification; management and operation of critical infrastructure; educational institution selections; employment selections (recruiting); government benefits and immigration status; law enforcement and judicial processes.
Transparency requirements are not mutually exclusive with the high- or low-risk categories; they can apply to systems in any tier. The four transparency requirements are: 1) AI systems intended to interact with people must be designed and developed in a way that makes it obvious to users that they are interacting with an AI system; 2) users of an emotion recognition system or a biometric categorization system must inform anyone they plan to use it on; 3) deepfake content must always be disclosed; and 4) requirements 1, 2, and 3 do not affect the requirements defined for high-risk systems.
Though low-risk systems are permitted without restriction, organizations should continue to monitor their AI systems periodically for changes and enhancements, particularly when adding functionality intended to interact with human emotions or characteristics through automated means.
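To make these tiers actionable in an internal inventory, teams often encode them as a simple classification scheme. The following Python sketch is purely illustrative: the tier labels, example use cases, and default-to-high behavior are our assumptions, not text from the Act, and any real classification requires legal review.

```python
from enum import Enum

class RiskTier(Enum):
    """Hypothetical internal labels for the EU AI Act's four risk tiers."""
    UNACCEPTABLE = "unacceptable"  # prohibited (e.g., social scoring)
    HIGH = "high"                  # permitted with conformity assessment
    LIMITED = "limited"            # transparency obligations apply
    MINIMAL = "minimal"            # permitted without restriction

# Illustrative mapping from a system's use case to a tier; a real
# classification requires legal review, not a lookup table.
USE_CASE_TIER = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "recruiting_screen": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,  # must disclose it is an AI system
    "spam_filter": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    """Default to HIGH so unknown systems get scrutiny, not a free pass."""
    return USE_CASE_TIER.get(use_case, RiskTier.HIGH)

if __name__ == "__main__":
    for uc in ("recruiting_screen", "spam_filter", "unknown_tool"):
        print(uc, "->", tier_for(uc).value)
```

Defaulting unknown systems to the high-risk tier is a conservative design choice: it forces review rather than silently waving a new system through.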
Implementing an AI risk management system is not a one-size-fits-all exercise. Existing models or frameworks that were built for traditional risks might not—and probably do not—apply to all the generative AI risks facing your organization. Everything up to and including principles and policies may need to be updated, and appropriate governance support will be required.
Many aspects of the EU AI Act will be challenging for organizations to implement and address, especially in terms of technical documentation for the testing, transparency, and explanation of AI applications. Adding to this challenge is that every AI application comes with its own business processes, impact, and risks.
Though there is no silver bullet, every business can kick-start its journey to EU AI Act compliance by taking these immediate steps, with CISOs collaborating with other business leaders as a key strategic and technical voice.
Review existing AI applications and categorize them to identify high-risk applications that require compliance with the EU AI Act. Leveraging an automated detection/identification solution, automating intake questionnaires, or implementing a workflow platform, for instance, can aid in accelerating the discovery, inventory, and classification activities required to support and map compliance obligations.
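As a concrete illustration of the intake-questionnaire idea, the minimal Python sketch below maps a few hypothetical questionnaire answers to a provisional risk tier. The field names and triage rules are assumptions for demonstration, not an official classification method; a legal review makes the final call.

```python
from dataclasses import dataclass

@dataclass
class IntakeAnswers:
    """Hypothetical intake questionnaire for one AI application."""
    name: str
    uses_biometric_id: bool
    used_in_hiring: bool
    interacts_with_public: bool
    processes_eu_personal_data: bool

def provisional_tier(a: IntakeAnswers) -> str:
    """Map answers to a provisional tier for triage purposes only."""
    if a.uses_biometric_id or a.used_in_hiring:
        return "high"      # areas the Act lists among high-risk examples
    if a.interacts_with_public:
        return "limited"   # transparency (disclosure) obligations
    return "minimal"

inventory = [
    IntakeAnswers("resume-screener", False, True, False, True),
    IntakeAnswers("support-chatbot", False, False, True, True),
    IntakeAnswers("log-anomaly-model", False, False, False, False),
]

for app in inventory:
    print(f"{app.name}: provisional tier = {provisional_tier(app)}")
```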
Implement standards and best practices for AI model development, deployment, and maintenance in alignment with the EU AI Act’s requirements and other emerging regulatory standards, and ensure scalability. Leveraging an automated solution to manage various aspects of compliance mapping, obligations tracking, and workflow management can aid in supporting and scaling various governance activities.
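Obligations tracking can start as a simple registry that maps each risk tier to its outstanding requirements. The sketch below uses hypothetical obligation identifiers; the actual obligations must be mapped from the Act's articles with counsel.

```python
from dataclasses import dataclass, field

# Illustrative obligations per tier; the IDs here are hypothetical.
OBLIGATIONS_BY_TIER = {
    "high": ["risk-mgmt-system", "technical-documentation",
             "human-oversight", "conformity-assessment"],
    "limited": ["ai-disclosure-to-users"],
    "minimal": [],
}

@dataclass
class ComplianceRecord:
    """Tracks which obligations an application has satisfied."""
    app: str
    tier: str
    completed: set = field(default_factory=set)

    def open_obligations(self) -> list:
        return [o for o in OBLIGATIONS_BY_TIER[self.tier]
                if o not in self.completed]

rec = ComplianceRecord("resume-screener", "high",
                       completed={"technical-documentation"})
print(rec.app, "still owes:", rec.open_obligations())
```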
Conduct a thorough gap analysis to identify areas of noncompliance and develop an immediate action plan to address these gaps. This analysis could be expedited using an automated or rapid AI assessment approach against an established governance framework or the EU AI Act’s compliance obligations.
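At its simplest, a gap analysis is a set difference between the obligations an application must meet and the controls already in place. This hypothetical Python sketch shows the idea; obligation names are illustrative only.

```python
# Required obligations (hypothetical IDs) versus implemented controls.
required = {"risk-mgmt-system", "technical-documentation",
            "human-oversight", "conformity-assessment"}
implemented = {"technical-documentation", "human-oversight"}

# The gap is whatever is required but not yet implemented.
gaps = sorted(required - implemented)
print("Compliance gaps to remediate:", gaps)

# An action plan could prioritize gaps by deadline or severity; here we
# simply emit an ordered to-do list.
for i, gap in enumerate(gaps, start=1):
    print(f"{i}. Close gap: {gap}")
```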
It’s clear the CISO and security team will play a crucial role in GenAI governance, management, and monitoring, helping businesses adapt to the challenges of the EU AI Act and accelerate value from this exciting emerging technology. For further insights and advice for CISOs on GenAI deployment and operations, check out our latest thought leadership.
KPMG generative AI survey report: Cybersecurity
An exclusive KPMG survey examines four areas where this remarkable technology shows great promise.
Is my AI secure?
Understanding the cyber risks of artificial intelligence to your business
KPMG AI Security Services
AI security framework design