The rapid spread of artificial intelligence (AI) raises a plethora of ethical, legal, and liability-related questions. The AI Act – the EU regulation that aims to become the world’s first comprehensive regulatory framework for AI – will answer some of them. The draft regulation introduces a risk-based categorization system, dividing AI systems into four categories with corresponding responsibilities for those who provide or use them. The AI Act is expected to be finalized in early 2024 – meaning now is the perfect time to get familiar with the new responsibilities it will introduce, as non-compliance can carry penalties of up to 30 million euros.
The Act aims to ensure the security and increase the transparency of quickly developing AI technologies. And since AI adoption keeps growing, we estimate that the new responsibilities will affect thousands of companies that develop or simply use AI. If you plan to provide AI systems on the EU market or integrate them into your workflows, you need to prepare for your new responsibilities. What those will be depends on the category your AI system falls into.
Unacceptable risk
The AI Act defines several types of systems whose use poses an unacceptable risk to human rights and is incompatible with European values. Using or importing such systems will be outright prohibited in the EU, barring some potential exceptions. If past legal developments are any indication, this category will include systems used for so-called social scoring or for remote, real-time biometric identification of persons.
High risk
Import or use of high-risk systems in the EU will be allowed only when they meet the regulatory requirements under the AI Act. Currently, these systems are divided into two subcategories:
I. The system is a safety component of a product, or is itself a product, subject to existing safety standards and third-party assessments in accordance with other EU regulations (machinery, toys, medical devices, etc.)
II. The system falls within one of the following eight areas:
- biometric identification and categorization of persons,
- critical infrastructure operation and management,
- education and vocational training,
- employment, workers management, and access to self-employment,
- access to essential services and benefits in both public and private sectors, use of such services and benefits,
- law enforcement,
- migration, asylum, and border control management,
- administration of justice.
The use of high-risk AI systems will bring many new responsibilities that affect producers, distributors, providers, and users of such systems alike. High-risk systems will have to be designed to automatically log events and incidents, be transparent in how they operate, and meet defined levels of accuracy and cybersecurity.
Before you can bring such a system to the EU market, you will need to prepare technical documentation showing that it meets the AI Act requirements. If the system is trained using data sets, you will also need to meet criteria regarding the training data.
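To make the automatic-logging duty more concrete, here is a minimal sketch of what event logging around a prediction system could look like. This is purely illustrative: the wrapper name, log format, and recorded fields are our own assumptions, not anything prescribed by the AI Act.

```python
import json
from datetime import datetime, timezone

class AuditedModel:
    """Hypothetical wrapper that records every prediction event to an
    append-only log, illustrating the kind of automatic logging the
    AI Act expects from high-risk systems."""

    def __init__(self, model_fn, log_path="audit_log.jsonl"):
        self.model_fn = model_fn  # the underlying prediction function
        self.log_path = log_path

    def predict(self, inputs):
        result = self.model_fn(inputs)
        # Record who/what/when for each prediction (fields are illustrative).
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "inputs": inputs,
            "output": result,
        }
        with open(self.log_path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return result
```

In practice, such a log would feed into the technical documentation and incident-reporting processes the regulation requires; the exact schema and retention rules would depend on the final text of the Act and any accompanying standards.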
Limited and minimal risk
Systems in this category will only need to meet certain transparency requirements under the AI Act. For example, you will be required to inform users that they are interacting with an AI, such as a chatbot.
Foundation models and generative AI
In reaction to ChatGPT and similar tools, new requirements will also cover so-called foundation models (large AI models trained on enormous amounts of data that can then be used for a wide range of tasks) and generative AI. GPT-3, BERT, and DALL-E 2 are early examples of such models. The EU proposes that providers of generative AI will be obliged to:
- ensure that users are informed about content generated by AI,
- develop AI models in a way that will prevent the production of illegal content,
- document and publish a detailed overview of copyrighted training data.
Responsible AI will help you stay compliant
The AI Act will introduce steep fines for non-compliance – up to 30 million euros or 6% of global turnover – making this one more reason we recommend starting preparations now.
If you’re unsure which risks you might be facing and how to make your internal processes compliant with current and future regulatory requirements, our Responsible AI service is just the right thing for you. We are here to help you map all AI systems used within your company and make sure your processes comply with legal, technical, and ethical requirements – so you can continue using AI responsibly, without unnecessary risks.