Decoding the EU AI Act
A guide for business leaders

What the new Act means—and how you can respond
The proposed European Union (EU) Artificial Intelligence (AI) Act is one of the world's leading proposed AI regulations. It provides a robust regulatory framework for AI applications that is designed to manage risk and ensure compliance for both providers and users. And it will likely reshape the market landscape for AI, just as the EU’s General Data Protection Regulation (GDPR) did for data privacy in the last decade.
AI has the potential to deliver new value streams and transform business models, making it an important competitive advantage for businesses across industries. However, business leaders cite concerns about the regulatory landscape as the #1 barrier to AI adoption.1 That makes understanding the AI Act important for more than just risk and compliance reasons: it’s crucial for the future of your business.
Dive into our thinking
Decoding the EU AI Act: What the new Act means—and how you can respond
For organizations operating in the EU, understanding the tenets behind this regulation is important. Our new article explores key provisions and obligations within the Act and demonstrates how KPMG's Trusted AI framework can help companies deploy AI responsibly while managing risk and compliance.
The EU AI Act at a glance
1. Who will be impacted?
The EU AI Act applies to any provider placing an AI product or service on the market within the EU, as well as to all users of those products and services in the EU.
2. What’s in the Act?
The EU AI Act is a complex, 108-page document. It defines artificial intelligence; proposes consumer protection for the users of AI products; creates an AI risk framework and sets requirements for high-risk systems; and establishes transparency rules for AI systems.
3. When will it go into effect?
The EU AI Act is expected to be finalized in 2024. Afterwards, organizations will have a 24-month transition period before the Act becomes fully enforceable.
A new framework for understanding AI risk
The EU AI Act creates four risk categories for AI products and services. How a product or service is categorized depends on the data it captures and the decisions or actions that are made with that data.
High-risk systems must comply with additional requirements regarding:
- Data governance
- Technical documentation
- Transparency
- Human oversight
- And more
High-risk systems include biometric identification and classification, employment selections, government benefits, and law enforcement and judicial processes.
The EU’s goal for this legislation is to ensure that AI systems are “safe, transparent, traceable, non-discriminatory and environmentally friendly.”2 The KPMG Trusted AI framework shares those priorities, helping organizations design, build, deploy, and use AI solutions in a responsible and ethical manner while also accelerating value.
The EU AI Act: 8 actions to take now
Kickstart your journey to compliance by taking these steps
Kickstarting the EU AI Act risk and compliance journey
Many aspects of the EU AI Act will be challenging to implement. Though the law is not expected to be finalized until later in 2024, the road to risk and compliance starts now. Our new infographic outlines eight ways businesses can take action today.
Explore more

An executive’s guide to establishing an AI Center of Excellence
How to develop a dedicated group within your business to help bring AI and automation initiatives to fruition.

How risk and compliance can accelerate generative AI adoption
Harness the power of generative AI in a trusted manner

Governing AI responsibly
Discover how risk professionals can develop an effective AI governance model

Where will AI/GenAI regulations go?
Demonstrating 'trusted AI systems'