
The rising importance of AI regulation

Our essential compliance thoughts on the EU AI Act: How it will affect your European and Swiss business operations.

Artificial intelligence (AI) is transforming industries around the world. To address its growing impact, the European Union has introduced the EU AI Act.

This groundbreaking legislation provides a comprehensive legal framework to safeguard rights, ensure ethical AI practices, and build trust in AI technologies.

For businesses, the Act is more than just a guideline. It is a mandate to comply with the requirements for responsible AI deployment.

As reliance on generative AI and general-purpose AI models grows, organizations must adapt quickly. This guide outlines the key aspects of the EU AI regulation. It offers practical insights for adapting to AI regulations while continuing to innovate. By embracing the AI compliance EU framework, businesses can lead the way in the era of responsible AI.

What is the EU AI Act?

The EU AI Act is the first-ever comprehensive legal framework for regulating artificial intelligence (AI) in the European Union. Proposed by the European Commission, it establishes rules to ensure AI systems are transparent, accountable, and used ethically.

The Act uses a risk-based approach to categorize AI systems. It evaluates their potential impact on individuals and society and outlines specific requirements for each category.

The Act complements existing laws, such as the General Data Protection Regulation (GDPR). It addresses modern challenges, including the rise of generative AI and Large Language Models (LLMs). It aims to create a balanced framework that encourages innovation while safeguarding citizens' rights and ensuring data privacy.

Key components of the EU AI Act

The EU AI Act uses a risk-based approach to classify AI systems into four categories:

  1. Unacceptable risk

    AI systems that threaten fundamental rights, such as government-run social scoring systems, are banned outright.

  2. High risk

    These include systems used in law enforcement, critical infrastructure, or those handling personal data. Developers must follow strict regulations. This includes implementing a risk management system, conducting risk assessments, and ensuring data governance.

  3. Limited risk

    Systems in this category must meet transparency requirements. For example, users must be notified when interacting with AI-generated content, such as chatbots or generative AI models.

  4. Minimal or low risk

    These systems have minimal compliance obligations. However, they must follow basic principles of data governance and ethical AI.

This risk-based framework emphasizes AI accountability measures. It encourages businesses to proactively address risks and comply with the requirements set by the Act.
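The four tiers above can be pictured as a simple lookup from risk tier to headline obligations. The sketch below is purely illustrative (the `RiskTier` enum and `OBLIGATIONS` mapping are our own simplification, not terminology from the Act) and is no substitute for a legal classification:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Simplified, illustrative mapping of risk tiers to headline obligations
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited: must not be placed on the EU market"],
    RiskTier.HIGH: ["risk management system", "risk assessments", "data governance"],
    RiskTier.LIMITED: ["transparency: notify users they are interacting with AI"],
    RiskTier.MINIMAL: ["basic data governance and ethical AI principles"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the headline obligations for a given risk tier (illustrative only)."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```

In practice, the hard part is not the lookup but the classification itself, which may require legal analysis of the system's intended purpose and context of use.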

Why the EU AI Act matters for businesses

The EU AI Act will have a significant impact on organizations in the European Union, but also on organizations based in Switzerland insofar as they use data of persons domiciled in the EU.

Businesses must integrate AI responsibly while adhering to this evolving AI regulation. Adapting to the EU AI Act is not just about compliance—it’s about maintaining competitiveness in an era of increased scrutiny.

Which role applies to me?

When operating with high-risk AI systems, the EU AI Act distinguishes between different roles:

  • Provider/manufacturer: a natural or legal person or public authority, agency or other body that develops an AI system and intends to put it on the EU market.
  • Importer: a natural or legal person located or established in the EU that places on the EU market an AI system bearing the name or trademark of a natural or legal person established outside the EU.
  • Distributor: a natural or legal person in the supply chain, other than the provider or importer, who makes an AI system available on the EU market.
  • Deployer: any natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.
  • Hybrid: although there is no legal definition in the EU AI Act, an organization can assume several roles at once, e.g. when it buys and uses an existing AI system but feeds it with its own organizational data. This can entail different responsibilities, which need to be determined on a case-by-case basis.
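As a rough decision aid, the role definitions above can be expressed as a set of predicates. This sketch is hypothetical: the `Organization` fields and the `roles_for` function are our own illustration, and a real role determination, especially for hybrid setups, requires case-by-case legal analysis:

```python
from dataclasses import dataclass

@dataclass
class Organization:
    develops_system: bool           # builds the AI system itself
    established_in_eu: bool
    places_on_eu_market: bool       # makes the system available on the EU market
    brand_is_non_eu: bool           # system bears the mark of a non-EU entity
    distributes_in_supply_chain: bool
    uses_under_own_authority: bool  # deploys the system professionally

def roles_for(org: Organization) -> set[str]:
    """Return the (possibly multiple) EU AI Act roles an organization may hold."""
    roles = set()
    if org.develops_system and org.places_on_eu_market:
        roles.add("provider")
    if org.established_in_eu and org.places_on_eu_market and org.brand_is_non_eu:
        roles.add("importer")
    if org.distributes_in_supply_chain and not roles:
        roles.add("distributor")
    if org.uses_under_own_authority:
        roles.add("deployer")
    return roles

# Example: a Swiss company that buys an AI system and uses it on its own data
buyer = Organization(False, False, False, False, False, True)
print(roles_for(buyer))
```

An organization for which more than one role applies would correspond to the hybrid situation described above.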

Key business implications include:

  • The need to align AI practices with a stringent legal framework.
  • Establishing robust data governance and risk management systems.
  • Preparing for additional oversight in emerging technologies like generative AI and deep learning.


Challenges in complying with the EU AI Act

Complying with the EU AI Act presents several critical challenges that businesses must address:

  • Risk categorization complexity: Accurately classifying AI systems into the correct risk category is one of the most significant hurdles. For instance, high-risk AI systems require detailed assessments, audits, and ongoing monitoring. Businesses working with general-purpose AI models or generative adversarial networks (GANs) often face ambiguity due to overlapping classifications.
  • Proactive risk management: Developing and implementing a comprehensive risk management system can be resource-intensive. Organizations need to identify risks early in the AI lifecycle. They should adopt safeguards to mitigate these risks. Examples include rigorous testing of training data and verifying Large Language Models (LLMs).
  • Integration of compliance across teams: The Act calls for cross-functional collaboration, requiring businesses to establish internal AI governance functions to manage compliance. These teams need to combine legal, technical, and operational expertise to adapt effectively to the Act’s requirements.
  • Operational costs: Complying with the EU AI Act can lead to significant cost increases. This is particularly challenging for small and medium-sized enterprises (SMEs). These costs often include audits, regulatory reporting, and restructured workflows to align with data privacy acts and accountability measures.
  • Balancing innovation and compliance: Striking a balance between fostering innovation and adhering to regulations is a persistent challenge. Businesses developing generative AI models for applications like image generation or natural language processing must carefully navigate the boundaries set by the Act.

How to prepare for the EU AI Act

A key prerequisite is building up Digital Trust, which will help you protect all stakeholders’ interests and uphold societal expectations and values.

To help you thrive with AI, KPMG offers, among others, the following services:

  1. Evaluating and confirming your readiness for the EU AI Act, e.g. with our EU AI Act readiness assessment
  2. Strengthening your controls and governance by integrating and extending AI controls and helping you implement a robust AI risk management framework
  3. Hardening your AI implementation to avoid introducing vulnerabilities to external attacks
  4. Assisting you in identifying and implementing AI use cases
  5. Using AI tools to strengthen your AI security
  6. Providing certification of your AI management system according to the leading ISO/IEC 42001 standard, allowing you to publicly demonstrate the trustworthiness of your AI operations
  7. Training your teams on the EU AI Act and its requirements, including best practices for ethical AI development

The next phase of the EU AI Act takes effect on February 2, 2025. By then, organizations must ensure AI literacy across their teams, assess the risks of existing AI systems, and decommission prohibited use cases to comply with the EU AI Act.

To succeed, organizations must go beyond basic compliance. They need to implement a robust AI risk management system, align their processes with the General Data Protection Regulation (GDPR), and develop a culture of accountability to comply with the requirements of the EU AI Act.


The future of AI regulations in Europe

The European Commission has positioned the EU AI Act as a foundation for future global standards in AI regulation. Predicted trends include:

  • Heightened scrutiny on high-risk AI systems: Evolving technologies like generative AI and deep learning are expected to face increased regulatory oversight.
  • Global collaboration: Efforts to align with other regions, potentially creating international frameworks.
  • Focus on General Purpose AI Models: Addressing the unique challenges of training and deploying Large Language Models (LLMs) and other General-Purpose AI Models.

For businesses, staying ahead means understanding these trends and adapting AI strategies accordingly.

Conclusion

The EU AI Act represents a critical step in shaping the future of artificial intelligence. By introducing a risk-based approach, it ensures data privacy, promotes AI accountability measures, and protects fundamental rights.

For businesses, understanding and preparing for the Act is crucial. It ensures compliance and helps maintain a competitive edge in the evolving AI landscape.

Ready to ensure compliance and gain a competitive edge?

Navigating the complexities of the EU AI Act requires expert guidance and tailored solutions. At KPMG Switzerland, we offer specialized support in data privacy, trusted AI development, digital innovation and data governance. Our team can help you align your AI systems with regulatory requirements while driving innovation and building trust.

Contact us today to discover how our data privacy and trusted AI services can empower your organization to thrive in the evolving AI landscape.

Meet our expert

Alberto Job

Director, Information Management & Compliance

KPMG Switzerland
