With the rapid rise of artificial intelligence (AI), many questions arise about ethics, the protection of rights, and liability connected with its use. Some of them will be answered by the so-called AI Act – the EU regulation on artificial intelligence, which has the ambition to become the world's first comprehensive AI regulation. The draft regulation divides AI into four categories according to the level of risk and, proportionately to that risk, sets obligations for a wide range of entities. The final form of the AI Act could be known in early 2024. Get familiar in time with the obligations the new regulation will bring – violations carry fines of up to 6% of annual worldwide turnover.

The Act aims to ensure the safety and increase the transparency of rapidly developing AI technologies. And since AI adoption keeps growing, we estimate that the new obligations will affect thousands of companies that develop or simply use AI. If you are among those who plan to provide AI systems on the EU market or integrate them into your workflows, you need to prepare for your new obligations. What those will be depends on the category your AI system falls into.

Unacceptable risk

The AI Act defines several types of systems whose use poses a threat to fundamental rights and is incompatible with European values. Placing such systems on the EU market or using them will be outright prohibited, barring some narrow exceptions. If the legislative process so far is any indication, this category will include systems used for so-called social scoring and for remote, real-time biometric identification of persons.

High risk

Placing high-risk systems on the EU market or using them will be allowed only if they meet the regulatory requirements of the AI Act. The proposal currently divides these systems into two subcategories:

I. The system is a safety component of a product, or is itself a product, covered by existing safety standards and third-party conformity assessments under other EU regulations (machinery, toys, medical devices, etc.)

II.  The system falls within one of the following eight areas:

  • biometric identification and categorization of persons,
  • critical infrastructure operation and management,
  • education and vocational training,
  • employment, workers management, and access to self-employment,
  • access to essential services and benefits in both public and private sectors, use of such services and benefits,
  • law enforcement,
  • migration, asylum, and border control management,
  • administration of justice.

The use of high-risk AI systems will bring many new responsibilities that will affect producers, distributors, providers, and users of such systems alike. High-risk systems will have to be designed to automatically log incidents, be transparent in how they operate, and meet a certain level of accuracy and cybersecurity.
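As an illustration, the automatic logging requirement might look something like the following minimal Python sketch. The event fields, the confidence threshold, and the function name are our own assumptions for the sake of the example, not terms defined by the AI Act.

```python
import logging

# Audit logger for a hypothetical high-risk AI system.
logger = logging.getLogger("ai_system.audit")

def log_inference_event(model_version: str, input_id: str, confidence: float) -> dict:
    """Record one inference event with enough context for a later audit."""
    event = {
        "model_version": model_version,
        "input_id": input_id,
        "confidence": confidence,
    }
    logger.info("inference event: %s", event)
    # Illustrative rule: flag low-confidence outputs as incidents for human review.
    if confidence < 0.5:
        logger.warning("low-confidence incident: %s", event)
        event["incident"] = True
    return event
```

The point of such a log is that every decision the system makes leaves a traceable record – which is also what makes later accuracy and cybersecurity audits feasible.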

Before you can place your system on the EU market, you will also need to prepare technical documentation showing that it meets the AI Act requirements. Systems trained on data sets will additionally have to meet training data quality criteria.

Limited and minimal risk

Systems in this category will only need to meet certain transparency requirements under the AI Act. For example, you will be required to inform users that they are interacting with an AI, such as a chatbot.
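In practice, that transparency duty can be as simple as a disclosure shown before any other message. The wording and the function below are illustrative assumptions, not text prescribed by the AI Act.

```python
# Disclosure text is our own example wording, not mandated phrasing.
AI_DISCLOSURE = "Please note: you are chatting with an AI assistant, not a human."

def start_chat(bot_name: str, greeting: str) -> list:
    """Open a chat session with the AI disclosure shown first."""
    return [f"{bot_name}: {AI_DISCLOSURE}", f"{bot_name}: {greeting}"]
```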

Foundation models and generative AI

In reaction to ChatGPT and similar tools, new requirements will also cover the so-called foundation models (large AI models trained on enormous amounts of data that can then be adapted to a wide range of tasks) and generative AI. GPT-3, BERT, and DALL-E 2 are some early examples of such models. The EU proposes that providers of generative AI will be obliged to:

  • ensure that users are informed about content generated by AI,
  • develop AI models in a way that will prevent the production of illegal content,
  • document and publish a detailed overview of copyrighted training data.
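The first obligation above – informing users about AI-generated content – could, for instance, take the form of machine-readable provenance labels attached to every output. This is a hedged sketch; the metadata keys are our assumption, not a format specified by the AI Act.

```python
from datetime import datetime, timezone

def label_generated_content(text: str, model_name: str) -> dict:
    """Wrap AI-generated text with provenance metadata.

    The keys are illustrative assumptions showing one way to make
    the AI origin of the content explicit to downstream users.
    """
    return {
        "content": text,
        "generated_by_ai": True,
        "model": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
```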

Responsible AI is here to help you stay compliant

The AI Act will introduce steep fines for non-compliance – up to 30 million euros or 6% of global annual turnover – which is one of the reasons why we recommend starting preparations now.

If you’re unsure of which risks you might be facing and how to make your internal processes compliant with current and future regulatory requirements, our Responsible AI service is just the right thing for you. We are here to help you map all AI systems used within your company and make sure your processes comply with legal, technical, and ethical requirements. So you can continue using AI responsibly, without unnecessary risks.