A guide for business leaders
The proposed European Union (EU) Artificial Intelligence (AI) Act is one of the leading proposed AI regulations in the world. It provides a robust regulatory framework for AI applications, helping both providers and users manage risk and ensure compliance. And it will likely reshape the market landscape for AI, just as the EU’s General Data Protection Regulation (GDPR) did for data privacy over the last decade.
AI has the potential to deliver new value streams and transform business models, making it an important competitive advantage for businesses across industries. However, business leaders cite concerns about the regulatory landscape as the #1 barrier to AI adoption.1 That makes understanding the AI Act important for more than just risk and compliance reasons: it’s crucial for the future of your business.
Decoding the EU AI Act: What the new Act means—and how you can respond
For organizations operating in the EU, understanding the tenets behind this regulation is important. Our new article explores key provisions and obligations within the Act and demonstrates how KPMG's Trusted AI framework can help companies deploy AI responsibly while managing risk and compliance.
Who will be impacted?
The EU AI Act applies to any provider placing an AI product or service within the EU, and all users of those products and services in the EU.
What’s in the Act?
The EU AI Act is a complex, 108-page document. It defines artificial intelligence; proposes consumer protection for the users of AI products; creates an AI risk framework and sets requirements for high-risk systems; and establishes transparency rules for AI systems.
When will it go into effect?
The EU AI Act is expected to be finalized in 2024. Afterwards, organizations will have a 24-month transition period before it becomes fully enforceable.
The EU AI Act creates four risk categories for AI products and services. How a product or service is categorized depends on the data it captures and the decisions or actions that are made with that data.
High-risk systems include biometric identification and classification, employment selection, access to government benefits, and law enforcement and judicial processes. These systems must comply with additional requirements before they can be placed on the market.
The EU’s goal for this legislation is to ensure that AI systems are “safe, transparent, traceable, non-discriminatory and environmentally friendly.”2 The KPMG Trusted AI framework shares those priorities, helping organizations design, build, deploy, and use AI solutions in a responsible and ethical manner while also accelerating value.
Kickstart your journey to compliance by taking these steps
Many aspects of the EU AI Act will be challenging to implement. Though the law is not expected to be finalized until later in 2024, the road to risk and compliance starts now. Our new infographic outlines eight ways businesses can take action today.
View our infographic
How risk and compliance can accelerate generative AI adoption
Harness the power of generative AI in a trusted manner
Governing AI responsibly
Discover how risk professionals can develop an effective AI governance model
Where will AI/GenAI regulations go?
Demonstrating 'trusted AI systems'