EU Artificial Intelligence Act
Today, artificial intelligence ("AI") systems are distinguishing themselves through the value they add to the business world and the ways in which they facilitate daily life. However, uncertainty remains about how to assess the problems that may arise from the use of these emerging systems and how to impose sanctions. The regulation known as the EU Artificial Intelligence Act ("EU AI Act"), agreed by the European Commission and the European Parliament, is of great importance in eliminating these uncertainties. In this context, we have compiled for you the most notable provisions of the act, known as the world's first AI regulation, together with the related points of discussion.
General Description of the Regulation
The European Union introduced the EU AI Act in order to regulate the rapid progress in the field of AI. AI, which is used in many areas such as healthcare, energy, manufacturing, communication and research, carries risks as well as benefits. The act, first proposed by the European Commission in April 2021 to address these risks, classifies AI systems according to the level of risk they pose and determines the rules to be applied at each risk level. The EU AI Act is significant because it is the first step taken anywhere in the world to regulate AI.
The objectives of the EU AI Act are:
- To ensure that the AI systems offered and used in the EU market are safe and respect fundamental rights and EU values,
- To provide legal certainty that facilitates investment and innovation in the field of AI,
- To strengthen the effective implementation and supervision of existing regulations on fundamental rights and security requirements applicable to AI systems,
- To facilitate the development of a single market for legal, safe and reliable AI applications and to prevent market fragmentation.
The EU AI Act applies to:
- Providers placing AI systems on the market or putting them into service in the EU, regardless of whether those providers are established in the EU or in a third country,
- Users of AI systems located within the EU,
- Providers and users of AI systems located in a third country, where the output produced by the system is used within the EU.
- Unacceptable-risk AI systems are systems that are considered a threat to people, and they are banned. The following AI activities are considered unacceptable risk:
- Cognitive behavioral manipulation of people or of specific vulnerable groups: for example, toys that encourage dangerous behavior in children,
- Social scoring: Classification of people based on behavior, socioeconomic status, or personal characteristics,
- Biometric identification and categorization of people,
- Real-time and remote biometric identification systems, such as facial recognition.
- It has been stated that some exceptions may be allowed for legitimate purposes even within these unacceptable-risk applications. For example, remote biometric identification systems may be permitted for the prosecution of serious crimes, subject to court approval.
- It has been decided that AI systems that adversely affect safety or fundamental rights will be considered high risk. High-risk systems are assessed in two categories:
- I. AI systems used in products subject to the EU's product safety legislation.
- II. AI in certain areas that need to be registered in the EU database:
- Management and operation of critical infrastructure
- Education and vocational training
- Employment, worker management and access to self-employment
- Access to and use of essential private services and public services and benefits
- Law enforcement
- Migration, asylum and border control management
- Support for the interpretation and application of law
- It has been decided that all high-risk AI systems will be evaluated before they are placed on the market and throughout their lifecycle.
- Limited-risk AI systems are required to comply with transparency requirements that allow users to make informed decisions. Users should be informed when they are interacting with AI, and after interacting with an application they can decide whether they wish to continue using it. This risk category includes AI systems that generate or manipulate image, audio and video content.
- All other AI systems, those with low or minimal risk, may be developed and used in the EU without complying with additional legal obligations. However, the EU AI Act envisages the creation of a general framework of codes of conduct to encourage the voluntary application of the requirements that are mandatory for high-risk AI systems.
Sanctions
The EU AI Act stipulates that member states designate one or more competent authorities, including a national supervisory authority, to oversee the implementation of the regulation. In addition, a European Artificial Intelligence Board, consisting of representatives of the member states and the Commission, is to be established at EU level.
National market surveillance authorities will be responsible for assessing operators' compliance with the obligations and requirements for high-risk AI systems. These authorities will have access to confidential information, including the source code of AI systems, and will be subject to a number of binding confidentiality obligations. They will be empowered to take corrective measures to prohibit, restrict, withdraw or recall AI systems that do not comply with the EU AI Act or that, even where they do comply, pose a risk to the health, safety or fundamental rights of individuals.
In the event of non-compliance with the act, administrative fines of varying scales are provided for, depending on the severity of the violation. These fines can reach up to €30 million or 6% of total worldwide annual turnover.
EU AI Liability Directive
On 28 September 2022, the European Commission published a proposal for the AI Liability Directive.
This directive is formally titled the "Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence". It addresses claims for damages arising from AI systems or their use and aims to regulate non-contractual civil liability rules.
The AI Liability Directive complements the EU AI Act in that it introduces a new liability regime for damage caused by AI.
This directive increases consumers' confidence in AI products and services by providing legal certainty and assists consumers in their claims for compensation for AI-related damages.
For more detailed information, you can find our article on the EU AI Liability Directive here.
The Impact of Regulation on Existing Technologies
Under the act, providers of generative AI systems such as ChatGPT are required to comply with a number of transparency requirements:
- Disclosing that the content they provide is generated by AI,
- Designing the model to prevent it from generating illegal content,
- Publishing summaries of the copyrighted data used for training.
It has been stated that high-impact general-purpose AI models posing systemic risk will have to undergo thorough evaluations, and that serious incidents must be reported to the European Commission. An example of such a system is GPT-4, a more advanced AI model.
Regulatory Updates:
The act was adopted by the European Parliament by a majority in the final vote held on 14 June 2023.
On 9 December 2023, the European Parliament reached a provisional agreement with the Council on the AI Act. For the text to become EU law, it must still be formally adopted by both the Parliament and the Council. The act is a major step forward in regulating and governing AI technology, and it aligns the use of AI with the EU's overall digital strategy.