• Thierry Kellerhals, Director |
  • Michael Wellner, Expert |

The EU AI Act sets new rules, categorizing systems by risk to balance innovation and ethical use. It doesn't ban any technology, but seeks to ensure safety without stifling progress. Swiss firms integrating into the European market must comply, yet can gain by pioneering responsible AI development.

In the dynamic landscape of artificial intelligence (AI), the European Union (EU) has introduced a groundbreaking regulation: the EU AI Act. This law represents the first comprehensive global framework for AI governance, raising vital questions for Swiss companies about its impact and whether it will stifle innovation.

The EU's regulatory approach often influences standards beyond its borders, affecting global markets and tech ecosystems. Deeply integrated into European and international markets, Swiss companies must navigate these regulations to ensure smooth operations and continued market access. Compliance with the EU AI Act is not merely a matter of adhering to regulations; it's also about aligning with global AI norms, as the EU plays a role in setting some of the world's most stringent AI guidelines. This strategic alignment simplifies global operations by meeting multiple international standards at once.

Across the AI sector, from startups to established giants, there's a shared concern that stringent regulations could slow the pace of innovation. The fear is that such regulatory measures could introduce barriers that increase costs, complicate compliance and stifle the spirit of innovation. For widely used, transformational technologies such as ChatGPT, the worry is that regulation will limit their growth and evolution. Startups, in particular, worry that the burden of compliance will put them at a disadvantage compared to well-resourced, established companies.

The EU AI Act categorizes AI systems based on their level of risk (unacceptable, high, limited and minimal risk). This categorization is pivotal, as it aims to prevent harmful uses of AI while not unduly burdening less risky, innovative AI endeavors. These risk levels are derived primarily from the purpose and the potential impact of an AI system on individuals and society, rather than from the underlying technology.
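To make the purpose-based logic concrete, the tiered approach can be sketched as a simple lookup from intended use to risk level. The use cases below are simplified illustrations chosen by us, not a legal mapping; actual classification follows the Act's annexes:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict obligations, e.g. conformity assessment
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no additional obligations

# Illustrative mapping from intended purpose to risk tier.
# Note: the Act classifies by purpose and impact, not by technology.
EXAMPLE_USE_CASES = {
    "social_scoring_by_public_authorities": RiskTier.UNACCEPTABLE,
    "cv_screening_for_recruitment": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def risk_tier(use_case: str) -> RiskTier:
    """Look up the illustrative risk tier for a given intended purpose."""
    return EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)

print(risk_tier("cv_screening_for_recruitment").value)  # -> high
```

The point of the sketch is that the same underlying model (say, a text classifier) lands in different tiers depending on what it is used for.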

While each level is subject to a different set of restrictions and enforcement, no technology is prohibited per se. Some technologies, such as neural networks whose decisions are difficult to explain and analyze, make compliance for high-risk use cases more complex, but not impossible. Since the AI Act does not prohibit any particular technology for a given use case, it is future-proof and leaves room for further innovation in AI technologies.

The recent popularity of General Purpose AI (GPAI) models such as ChatGPT also influenced the final version of the law. These technologies didn't exist at the time of the initial draft of the EU AI Act, and they established a new technology paradigm to which the purpose-based risk approach does not readily apply. The EU now distinguishes between "conventional GPAIs" and "systemic-risk GPAIs". Conventional GPAIs are subject to minimal documentation requirements, while systemic-risk GPAIs are subject to more rigorous oversight. This distinction ensures that GPAI models, which have become integral to various applications, remain governed by a framework that encourages innovation while ensuring accountability and safety.

The EU AI Act classifies GPAI models as a "systemic risk" if they have "high-impact capabilities" that match or exceed the most advanced models such as GPT-4, or if they are designated as such by the EU Commission, particularly those requiring more than 10^25 Floating Point Operations (FLOPs) for training. Given that many use cases can be realized with simpler models than GPT-4, and that the additional regulatory requirements burden only the providers of the largest models, these regulations are unlikely to slow the pace of innovation across the industry as a whole. However, the threshold may need updating as computing power advances, and future developments such as Quantum AI could challenge the existing criteria.
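To see how far typical models sit from the Act's 10^25 FLOP threshold, one can use the common industry rule of thumb that training compute is roughly 6 × parameters × training tokens. This heuristic, and the example model sizes, are our own illustration and not part of the Act:

```python
# EU AI Act presumption threshold for systemic-risk GPAI training compute
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate using the common ~6*N*D rule of thumb
    (N = parameter count, D = training tokens). A heuristic, not a legal test."""
    return 6.0 * n_params * n_tokens

def exceeds_threshold(n_params: float, n_tokens: float) -> bool:
    """True if estimated training compute exceeds the Act's 10^25 FLOP threshold."""
    return estimate_training_flops(n_params, n_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# A hypothetical 70B-parameter model trained on 2 trillion tokens:
# 6 * 7e10 * 2e12 = 8.4e23 FLOPs, well below 1e25
print(exceeds_threshold(7e10, 2e12))  # -> False
```

The sketch illustrates the article's point: a capable mid-sized model lands more than an order of magnitude below the threshold, so the stricter systemic-risk obligations apply only at the frontier.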

The Act's treatment of GPAI illustrates the EU's commitment to fostering innovation alongside responsible AI development. By categorizing GPAIs based on their potential impact, the regulation adapts to the evolving AI landscape and ensures that governance keeps pace with technological advances.

Despite the potential challenges, the benefits of the EU AI Act are significant. It promotes better governance and safer use of AI, providing a solid foundation for companies looking to transform their operations with AI. This is particularly important in sectors such as healthcare and finance, where the law provides clarity and reduces uncertainty, enabling companies to pursue AI-driven innovation with confidence.

For Swiss companies, complying with the EU AI Act isn't just about meeting regulations; it's about positioning themselves at the forefront of responsible AI development, reflecting a commitment to ethical standards and safety.

In conclusion, the EU AI Act doesn't mean the end for AI innovation or tools like ChatGPT. Rather, it’s the beginning of a new chapter in the history of AI, one in which innovation is coupled with responsibility, safety and ethics. The future of AI is not just about what technology can do, but about doing it right.