Understandably, multinational organizations are eager to take advantage of AI tools and the business breakthroughs they offer. The EU AI Act makes clear that new rules and regulations are now in force, and additional legal requirements are likely to be enacted in many other jurisdictions. Organizations are advised to use the newly enacted EU AI Act as a blueprint for building internal systems that not only follow the Act’s guidance but can also respond quickly to any new rules that arise. Because the use of AI in a business context is still relatively new, organizations have the opportunity to design systems that are not only compliant but also optimal.
The European Union (EU) recently passed the EU Artificial Intelligence Act (EU AI Act) with the intent of creating coherent and responsible rules governing the use of AI. As it did with privacy and data protection under the General Data Protection Regulation (GDPR), the EU appears to hope that this initial framework will set a global precedent for AI regulation. While the EU AI Act primarily applies to organizations directly involved in the creation and deployment of AI systems in the EU, it also applies to organizations that sell, import, or distribute AI products, or that plan for their AI products to be used, inside the EU.
In other words, the EU AI Act applies to non-EU-based organizations if their AI tools will find their way into the EU. Therefore, any organization that has a business connection to the EU should have a comprehensive understanding of these rules and how they apply, both now and in the future.
Generative AI tools are being rapidly developed and deployed, and almost every organization is trying to determine how best to utilize AI systems to advance its strategic business goals. As global organizations begin to integrate AI into their business models, they should understand and respond to existing and developing global regulations. Integrating AI systems and analyzing EU and other regulations should happen simultaneously, to help ensure that the right tools are chosen for the business’s needs while adhering to an evolving regulatory landscape.
What types of organizations and activities are covered?
While there will certainly be additional clarification in the coming years around the implementation of the EU AI Act, in the meantime, it provides useful guidelines for global organizations concerning risk allocation and AI adoption.
Who is covered?
No one would be surprised to find that major high-tech companies developing their own AI systems are the primary object of these new rules. However, the EU AI Act applies more broadly, to providers, manufacturers, importers, distributors, and deployers of AI systems. Many non-tech organizations that use AI will most likely find themselves falling under the definition of a “deployer”, which the Act defines broadly as “any natural or legal person…using an AI system under its authority…” In addition, the EU AI Act applies to providers and deployers of AI systems established outside the EU “to the extent the output produced by those systems is intended to be used in the Union.” An example might be a global organization based in the UK using an AI system created in the US to draft a contract sent to an EU entity; this type of activity will be regulated by the EU AI Act. Organizations with a global business model should be fluent in the EU AI Act’s requirements.
What types of activities are covered?
The EU AI Act creates a multi-tiered risk system that ranges from “unacceptable risk” to “minimal risk”. Certain other activities are also explicitly regulated, such as the requirement that all artificially generated or manipulated multimedia content be clearly labeled.
- Unacceptable risk
Certain AI systems that are perceived as a clear threat to the safety, livelihoods, and rights of individuals pose an unacceptable risk and are prohibited. Prohibited systems and practices currently include those that manipulate behavior, perform untargeted scraping of facial images, or target individuals based on certain behaviors, socioeconomic factors, or stereotypes. (There may be some law enforcement exceptions for the use of biometric identification.) An example of a prohibited system is one that would use race to predict behavior.
- High risk
This category focuses on how AI is utilized in certain products. High-risk products might include medical devices, aviation-related security products, and certain automotive products. The EU AI Act provides additional guidance by describing certain high-risk use cases; for example, it would be considered high-risk to use AI in biometric identification systems, creditworthiness evaluations, and certain HR situations. According to the Act, anyone utilizing high-risk products needs to take certain precautions, including developing a risk management system that incorporates data governance, documentation, and transparency rules.
- Limited risk
AI systems that do not pose a “high” or “unacceptable” risk under the EU AI Act, but which do interact with individuals, are subject to limited transparency obligations. An example of a limited-risk AI interaction is a conversation with a chatbot.
- Minimal or no risk
An AI system that does not fall into any of the above risk categories is deemed to pose only a minimal or non-existent risk and is therefore not subject to the EU AI Act’s requirements. However, privacy or other laws may still apply to these AI interactions.
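For organizations building an internal compliance inventory, these four tiers lend themselves to a simple classification structure. The sketch below is a minimal illustration in Python; the tier names mirror the Act, but the obligation summaries and the obligations_for helper are hypothetical simplifications, not a legal mapping.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers under the EU AI Act, from most to least restricted."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # permitted, subject to risk-management obligations
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # outside the Act's scope (other laws may apply)

# Hypothetical summary of headline obligations per tier; a real compliance
# system would encode the Act's detailed requirements, not this shorthand.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited: do not deploy"],
    RiskTier.HIGH: ["risk management system", "data governance",
                    "documentation", "transparency"],
    RiskTier.LIMITED: ["disclose to users that they are interacting with AI"],
    RiskTier.MINIMAL: ["no EU AI Act obligations (privacy or other laws may still apply)"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Look up the headline obligations recorded for a given risk tier."""
    return OBLIGATIONS[tier]

# Example: a customer-facing chatbot would typically fall in the limited tier
print(obligations_for(RiskTier.LIMITED))
```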
What are the consequences?
The newly created EU AI Office will oversee the implementation and enforcement of the EU AI Act. The consequences of noncompliance are significant. Depending on the violation and the organization’s revenues, fines range from €7.5 million or 1.5 percent of global revenue (for lesser violations) to €35 million or 7 percent of global revenue (for the most serious ones); in each case, the greater of the fixed amount and the revenue percentage generally applies.
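Because each fine ceiling is expressed as both a fixed amount and a share of global revenue, exposure scales with organization size. The following is a minimal sketch of that “greater of” calculation, assuming the higher figure applies; the max_fine function and the €2 billion revenue figure are hypothetical illustrations, not legal guidance.

```python
def max_fine(global_revenue_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Fine ceiling: the greater of a fixed amount and a share of global revenue."""
    return max(fixed_cap_eur, pct * global_revenue_eur)

# Hypothetical organization with EUR 2 billion in global annual revenue
revenue = 2_000_000_000

# Top tier (e.g., prohibited practices): EUR 35M or 7% of revenue
print(f"Top-tier ceiling:   EUR {max_fine(revenue, 35_000_000, 0.07):,.0f}")  # 140,000,000

# Lower tier: EUR 7.5M or 1.5% of revenue
print(f"Lower-tier ceiling: EUR {max_fine(revenue, 7_500_000, 0.015):,.0f}")  # 30,000,000
```

For this hypothetical organization, the revenue-based percentage exceeds the fixed amount at both tiers, so the percentage sets the ceiling.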
Next steps
As you develop your AI strategy, it is important to do so with the understanding that the EU AI Act is likely to provide a blueprint for future compliance rules. While non-EU countries are likely to develop their own independent sets of regulations, organizations should both understand the EU rules and develop an elastic framework that can respond quickly to this rapidly evolving technology. The EU AI Act enters into force 20 days after its publication in the Official Journal of the EU; the Council gave its final approval to the Act on May 21, 2024. Most of its provisions will become applicable two years after entry into force. Obviously, the state of AI may change dramatically in the next two years. However, certain basic steps can be taken now to help ensure that your organization can more easily respond to the EU AI Act, any additional rules, and any changes in the AI landscape.
Now that the basic framework of the EU AI Act is in place, it is a good idea to develop an AI compliance strategy alongside your business-focused AI strategy. In other words, compliance should be integrated into your AI strategy.
- The first step should be a thorough analysis of the current state of your organization’s AI systems. What types of AI are you using, and what types do you plan to use in the near future? You should have a solid understanding of your organization’s AI strategy, how it supports your business needs, and how it might evolve over time, as well as how AI is being used by all constituents in your organization. Do you have international branches or customers? Large multinational organizations that are early adopters of AI will require a comprehensive understanding of how their AI systems are being used and which risk categories those systems fall into. An inventory of your AI systems and strategies and their corresponding risk profiles is the natural starting point; a simple inventory structure is sketched after this list.
- Next, it is advisable to set up a formal governance system. A dedicated team of senior employees should be responsible for continually monitoring how AI is being used, especially as new AI systems, processes, and products are introduced. This group should be tasked with staying current on all global AI rules, as well as any domestic AI regulations and interpretations of existing rules. Organization-wide education should be part of this governance system: all employees with access to any AI system in use (including systems created by third parties) should understand how the system works, what restrictions are in place, and how to use it in a way that complies with both government and organizational rules. Organizations are also advised to create an in-house hotline where questions can be answered. In addition, there should be a process in place to deal with any violations that occur; this process can mimic whatever process the organization has established to track and address other types of intentional or unintentional misconduct.
- A monitoring system should be established. Organizations should create a meaningful set of key performance indicators (KPIs) to evaluate whether all elements of their governance system are working (the sketch after this list shows how such indicators might be recorded per system). Do employees understand how AI is being used? When a customer or division is added in a new jurisdiction with additional or different rules, can the AI system quickly adapt to that jurisdiction’s requirements? Are violations being addressed and reported promptly? Are key employees updated on new rules on a timely basis, and is there an internal and/or external team available to help answer AI-related compliance questions?
- It is imperative that organizations design and monitor processes to help ensure that they are complying with all applicable laws, including the newly enacted EU AI Act. However, organizations should also consider going a step further and developing internal guidelines that exceed the minimum compliance requirements. Ideally, organizations should seek to become model citizens in their use of AI systems by developing the most ethical set of internal rules they can. By taking a stand on ethical AI usage, organizations have the chance to distinguish themselves in an area where uncertainty is a defining factor and public distrust is a serious concern.
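To make the inventory and monitoring steps above concrete, the following sketch shows one possible shape for an internal AI-system register, tying each system’s risk tier to the governance KPIs discussed above. It is a minimal illustration in Python; the schema, field names (owner, jurisdictions, last_reviewed, and so on), and the 90-day review cadence are hypothetical choices, not requirements drawn from the Act.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in an internal AI-system inventory (hypothetical schema)."""
    name: str
    risk_tier: str            # "unacceptable" | "high" | "limited" | "minimal"
    owner: str                # senior employee accountable for the system
    jurisdictions: list[str]  # where the system's output is used, e.g. ["EU", "UK"]
    third_party: bool         # built in-house or supplied by a vendor?
    last_reviewed: date       # most recent governance-team review
    open_violations: int = 0  # tracked so violations are addressed and reported promptly

def needs_review(record: AISystemRecord, today: date, max_age_days: int = 90) -> bool:
    """KPI check: flag systems whose last review is older than the agreed cadence."""
    return (today - record.last_reviewed).days > max_age_days

# Usage: a UK-based deployer whose contract-drafting tool sends output into the EU
contract_tool = AISystemRecord(
    name="contract-drafting assistant",
    risk_tier="limited",
    owner="Head of Legal Operations",
    jurisdictions=["EU", "UK"],
    third_party=True,
    last_reviewed=date(2024, 3, 1),
)
print(needs_review(contract_tool, date(2024, 7, 1)))  # True: overdue for review
```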