The new law affects companies worldwide. Our new report outlines the details—and the steps companies should be considering today.
AI continues to get much-deserved attention from business leaders for its transformational potential. But all that energy and excitement has also come with one persistent concern from the C-suite: How will AI be regulated?
That ongoing uncertainty has been flagged as the top barrier to AI adoption in recent KPMG surveys. But with the passage of the European Union (EU) parliament’s Artificial Intelligence Act (AI Act), many organizations will now start to get some clarity as they navigate the specifics of this first-of-its-kind legislation.
The EU AI Act casts a wide net, affecting any organization that uses AI technology as part of products or services delivered in the EU. The AI Act’s formal approval starts the clock on a series of regulations that will roll out over two years. The new law includes a specific definition of AI, tiered risk levels, detailed consumer protections, and much more, as we outline in our new report, Decoding the EU AI Act.
Here’s a closer look at some of the highlights from our coverage.
The EU has a track record of establishing first-mover guidelines on emerging technologies that go on to influence how companies across the world operate. Its passage of the General Data Protection Regulation (GDPR) a decade ago was a major step in determining how organizations manage data privacy and security.
As with GDPR, just about any company that does business in the EU, or simply has customers there, is potentially affected. And even companies that neither operate nor have customers in the EU may feel the AI Act’s knock-on effects if they work with companies that are directly affected. So it’s important for many organizations to understand what’s in the new law.
As a starting point, the new EU law defines AI as:
A machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Critically, the EU AI Act also defines four tiers of risk, with related requirements at each level that oblige companies using AI systems to ensure they have appropriate consumer protections in place—or risk regulatory repercussions:
Unacceptable risk: AI applications in this tier are strictly prohibited. They include systems in areas such as social scoring, real-time biometric identification, and subliminal manipulation and disinformation.
High risk: This tier includes systems that pose an elevated risk to the health, safety, or rights of consumers. They are permitted, but subject to a high level of mandatory requirements. Examples include recruiting and educational tools, infrastructure management applications, and some biometric classification systems.
Limited risk: These permitted systems must adhere to strict transparency requirements. For example, an AI system that interacts with humans must be clearly identified as AI, and deepfake content is allowed only if it is clearly disclosed.
Minimal risk: These systems are allowed with very few requirements. They include many technologies widely in use today, such as email spam filters and AI-enabled video games. The big caveat: organizations will need to ensure these low-risk AI systems don’t move up a risk class as they add new functionality over time.
Even though the EU regulations will take 24 months to fully roll out and be enforced, the road ahead will be challenging for many organizations. Yes, the regulatory environment is starting to take shape. But AI applications continue to advance rapidly, adding new features that will subject them to new levels of oversight as they grow.
Rather than chasing tactical regulatory requirements, though, we believe the right response for every organization working with AI today starts with establishing an overall AI governance framework. This baseline framework allows a company to set an overarching AI strategy while also ensuring alignment with the AI Act.
With that framework in place, we believe there are at least eight steps that companies affected by the AI Act can take today to get ready.
To see our full list of recommended steps, download the full report below.