
Many companies now rely on generative artificial intelligence (AI), or "GenAI" for short. These applications and services are used to handle repetitive tasks, quickly find and analyse information, and automatically create texts and other content.

However, as promising as the potential efficiency gains may be, a hasty introduction of GenAI that disregards legal risks, compliance requirements and ethical standards can quickly have negative consequences: among other things, companies face the risk of reputational damage and financial losses, for example in liability cases.

EU AI regulation: the AI Act and the AI Liability Directive

The EU Commission is seeking to establish a legal framework that ensures the responsible and ethical use of AI. The AI Act defines detailed requirements for the development, sale and use of AI systems. A central aspect is its risk-based approach, which categorises AI systems into four classes (unacceptable, high, limited and minimal risk). Specific obligations apply depending on the risk level.

High-risk AI systems are subject to strict requirements, including the establishment of comprehensive risk and quality management systems, and violations can result in significant fines. At the same time, the regulation leaves legal terms and its interfaces with other laws in need of further clarification, which may pose legal challenges for those affected. In addition, the EU intends to introduce a liability directive to make it easier for victims of damage caused by AI systems to assert claims for damages.

Identifying legal risks

Depending on the type of system and the planned area of application, further legal risks and issues should be examined before GenAI is used. Particularly with open systems operated by third parties, entering information carries a considerable risk of unintentionally disclosing trade secrets, intellectual property and personal data. The consequences range from data protection violations and the corresponding fines to the loss of trade secret protection, and can cause considerable damage to companies.

A comprehensive analysis is required to identify and minimise existing risks when implementing a system.

The need for AI governance

In view of the regulatory and ethical requirements and the other legal risks, establishing appropriate AI governance will be essential for companies. It allows processes, responsibilities and limits on the use of AI systems to be regulated comprehensively.

Comprehensive AI compliance consulting from a single source

At KPMG Law, our multidisciplinary team of lawyers and our colleagues from KPMG AG's Digital Compliance and Cyber Security Consulting departments provide you with comprehensive support in meeting the compliance requirements for your AI systems. We work with you to develop the AI strategy that suits your company and support you in mastering the associated legal tasks.

Your contact persons

Michael Roth, LL.M. (Stellenbosch)*

Partner
KPMG Law Rechtsanwaltsgesellschaft mbH

+49 69 9511 95770
Email

Dr. Daniel Taraz*

Senior Manager
KPMG Law Rechtsanwaltsgesellschaft mbH

+49 40 360994 5483
Email

* Legal services are provided by KPMG Law Rechtsanwaltsgesellschaft mbH.