The 5 As in AI: A comparative review of the EU AI Act and the ASEAN AI Guide

Navigating AI

On 21 May 2024, the world’s first major law regulating artificial intelligence, the EU Artificial Intelligence Act, was passed. While the law is meant for EU member states, its implications are far-reaching. The EU AI Act applies a risk-based approach to artificial intelligence, meaning that different applications of the technology are treated differently, depending on the perceived threats they may pose to society.

ASEAN, by contrast, is still developing its AI strategies and governance frameworks. The Guide to AI Governance and Ethics, released in February 2024, sets out guidelines for ASEAN Member States to follow as they develop their AI systems and regulations.

In this article, Hanim Hamzah, KPMG Law’s Asia Pacific Regional Leader for Legal Services, reflects on EU and ASEAN regulations and the outlook for AI governance in ASEAN. 

Introduction

As part of its digital strategy, the European Union (“EU”) passed a landmark regulatory framework aimed at ensuring the safe and ethical deployment of artificial intelligence. The EU Artificial Intelligence Act (“EU AI Act”) intends to create coherent and responsible rules governing the use of AI. This legislation is poised to reshape the global landscape of technological governance.

In Southeast Asia, the Association of Southeast Asian Nations (“ASEAN”) has released its Guide to AI Governance and Ethics (“ASEAN AI Guide”), which takes a non-binding approach to AI development in the region. The ASEAN AI Guide sets out guidelines for Member States to follow as they develop their AI systems and regulations.

In this article, we look at both the EU AI Act and the ASEAN AI Guide, discuss how they compare, and consider what the Asia Pacific region can expect in AI regulation.

The 5 As in AI

1. Autonomy

Both the EU AI Act and the ASEAN AI Guide aim to regulate the extent to which AI systems can operate independently of human oversight, especially in critical areas like healthcare, finance, and law enforcement. While the EU AI Act defines AI systems as machine-based systems that can, with some level of autonomy, process inputs to generate outputs that influence people and environments, its primary focus is to strengthen regulatory compliance and drive transparency and accountability in how AI systems are developed and deployed. The EU AI Act further establishes boundaries for autonomous decision-making to prevent scenarios where AI systems act beyond the intended scope of their programming.

In contrast, the ASEAN AI Guide is meant to serve as a practical guide for Member States in designing, developing, and deploying traditional AI technologies. It establishes common principles and recommends best practices for implementing AI in the region. The ASEAN AI Guide takes a non-binding approach to AI development, with the guidelines intended to be used by governments and organizations in developing and using AI systems.

2. Access

The EU AI Act primarily applies to organizations directly involved in the creation and deployment of AI systems in the EU. Its reach also extends to organizations selling, importing, distributing, or planning for their AI products to be used inside the EU. Thus, any organization that has a business connection with the EU should have a comprehensive understanding of the rules and their implications. US and global companies that use AI anywhere could be subject to the EU AI Act if the output of the system is used inside the EU. For example, if a US company uses an AI tool to gauge its marketing reach in the EU, the output is used in the EU and the EU AI Act will apply.

The ASEAN AI Guide, which is confined to Member States, is a guidance framework rather than an act of law. It provides guidelines to ensure AI technologies comply with existing laws and regulations within Member States, and it encourages Member States to harmonize their AI regulations to facilitate cross-border cooperation and trade.

3. Ambit

The EU AI Act applies on a staggered timeline, with transition periods for its various requirements ranging from 6 to 24 months. It consists of 13 chapters, each addressing a different aspect of AI regulation. For each AI system being developed or used, companies will need to assess several aspects, such as their role (provider, deployer, importer or distributor), the system’s type (general-purpose AI or not), whether there is systemic risk (for general-purpose AI), and its level of risk. To elaborate, the EU AI Act creates a multi-tiered risk system that establishes obligations for providers and users depending on the level of risk the AI poses; a simplified triage sketch follows the four tiers below.

Unacceptable Risk

AI systems that are perceived as a clear threat to the safety, livelihoods, and rights of individuals pose an unacceptable risk. These systems are prohibited, though some exceptions may be allowed for law enforcement.

They include:

•       Cognitive behavioral manipulation of people or specific vulnerable groups (such as voice-activated toys that encourage dangerous behavior in children).

•       Social scoring (classifying people based on behavior, socio-economic status, or personal characteristics).

•       Real-time and remote biometric identification systems, such as facial recognition.

High Risk

AI systems that negatively affect safety or fundamental rights are considered high risk. These systems are permitted but must comply with multiple requirements and undergo a conformity assessment.

Such systems include those used in critical infrastructure, education, and employment.

Limited Risk

AI systems that do not pose a “high” or “unacceptable” risk, but which interact with individuals, are subject to limited transparency obligations. An example of limited risk is a conversation with a chatbot.

Minimal Risk

An AI system that does not fall into any of the above categories is deemed to pose only a minimal or non-existent risk. It is therefore not subject to regulation under the EU AI Act. Nonetheless, other laws or regulations may still apply to these AI interactions.
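To make the four tiers concrete, here is a minimal sketch in Python of how an organization might record a first-pass triage of its AI use cases against these tiers. The keyword mapping and example below are illustrative assumptions for this article, not an official classification tool; an actual assessment turns on the Act’s definitions, annexes and exemptions, as well as the organization’s role and whether the system is a general-purpose AI model.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (limited law-enforcement exceptions)"
    HIGH = "permitted, subject to requirements and a conformity assessment"
    LIMITED = "permitted, subject to transparency obligations"
    MINIMAL = "outside the EU AI Act (other laws may still apply)"

# Illustrative, non-exhaustive keyword lists drawn from the examples above;
# a real triage must follow the Act's own definitions, not keywords.
PROHIBITED_PRACTICES = {"cognitive behavioral manipulation", "social scoring",
                        "real-time remote biometric identification"}
HIGH_RISK_AREAS = {"critical infrastructure", "education", "employment"}

def first_pass_triage(use_case: str, interacts_with_individuals: bool) -> RiskTier:
    """Roughly map a described use case to one of the four risk tiers."""
    description = use_case.lower()
    if any(practice in description for practice in PROHIBITED_PRACTICES):
        return RiskTier.UNACCEPTABLE
    if any(area in description for area in HIGH_RISK_AREAS):
        return RiskTier.HIGH
    if interacts_with_individuals:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# Example: a CV-screening tool touches "employment", so it lands in the high-risk tier.
print(first_pass_triage("CV screening for employment decisions", True).value)
```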

The ASEAN AI Guide, rather than setting out distinct chapters classifying different levels of risk, lays out seven guiding principles to ensure trust in AI and to guide the design, development, and deployment of ethical AI systems with consideration of their broader societal impact. The seven principles are:

Transparency and Explainability

Transparency refers to providing disclosures on when an AI system is being used and its involvement in decision making, while explainability is the ability to communicate the reasoning behind an AI system’s decision in a way that is understandable to all relevant stakeholders. These principles are expected to build trust by ensuring that users are made aware of the AI technology, how their information from the interaction is used, and what decisions are made using such information.

Fairness and Equity

To ensure fairness, Member States are encouraged to have measures in place to ensure that decisions made by the algorithm do not further exacerbate or amplify existing discriminatory or unjust impacts across different demographics. Further, datasets used to train AI systems should be diverse, with appropriate measures taken to mitigate potential biases during data collection and processing.

Security and Safety

Impact or risk assessments should be conducted by Member States to identify and mitigate any risks that may arise from an AI system. This is to ensure the safety of developers, deployers, and users of AI systems, while ensuring that the security of the system includes mechanisms against malicious attacks specific to AI. These include tampering with datasets, malware, and attacks designed to reverse-engineer personal data used to train the AI.

Robustness and Reliability

AI systems should be sufficiently robust to cope with errors during execution and any unexpected or erroneous input. Rigorous testing should be carried out before the deployment of any AI system to ensure robustness and consistent results across a range of situations and environments.

Human-centricity

AI systems should respect human-centered values and pursue benefits that are important for society at large. This is all the more important where the systems are used to make decisions about humans or to aid them. It is imperative that these systems are designed with human benefit in mind and do not take advantage of vulnerable individuals.

Privacy and Data Governance

AI systems should have proper mechanisms in place to ensure data privacy and protection, and to maintain and protect the quality and integrity of data throughout its entire lifecycle. As such, data protocols should be set up to govern who can access the data and when it can be accessed. The data lifecycle (the way data is collected, stored, generated, and deleted) must comply with applicable data protection laws, data governance legislation, and ethical principles.

Accountability and Integrity

Deployers of AI systems should be accountable for decisions made by the system and ensure compliance with applicable laws and respect for AI ethics and principles. Organizations should adopt clear reporting structures for internal governance, setting out clearly the different kinds of roles and responsibilities for those involved in the AI system lifecycle.

4. Accountability

While the EU AI Act takes a risk-based approach backed by compliance and enforcement provisions, the ASEAN AI Guide takes more of a best-practice approach, with adoption being voluntary. This is because the EU is at a more advanced stage of regulation, while ASEAN is still shaping its approach to AI regulation, with Member States at varied levels of regulatory maturity and readiness for AI technologies.

There is no specific section on non-compliance or breaches for accountability under the ASEAN AI Guide. The EU AI Act, however, expressly provides that non-compliance has significant consequences for organizations. The EU AI Office, which has been created to oversee the implementation and enforcement of the EU AI Act, can impose penalties ranging from €7.5 million or 1.5% of global revenue to €35 million or 7% of global revenue.
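As a simplified illustration of how the upper penalty tier scales with company size, the short Python sketch below assumes the “whichever is higher” basis that the Act applies to the most serious infringements; in practice the applicable cap depends on the infringement tier, and lower caps apply in other cases and for SMEs.

```python
def top_tier_fine_ceiling(global_annual_turnover_eur: float) -> float:
    """Illustrative ceiling for the most serious infringements: the higher of
    EUR 35 million or 7% of worldwide annual turnover (simplified assumption;
    other infringement tiers carry lower caps)."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# Example: a company with EUR 2 billion in global turnover faces a ceiling of EUR 140 million.
print(f"EUR {top_tier_fine_ceiling(2_000_000_000):,.0f}")
```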

5. Agility

Although the EU AI Act sets forth express penalties for non-compliance, we must appreciate the agility of the language in which both the EU AI Act and the ASEAN AI Guide are written. Both set a clear tone while having regard to the fast-changing development of AI. To keep pace with changing conditions and new data, any law, regulation or guidance must be flexible enough to enable businesses to respond swiftly to market changes, emerging trends and new opportunities, while balancing the need to protect rights and liberties, promote social welfare and offer best practices, with the overall purpose of maintaining social order and adapting to societal needs.

How will the EU AI Act influence ASEAN AI regulations?

The EU AI Act has the potential to be a blueprint for future AI compliance rules in the region. The ASEAN AI Guide targets traditional AI systems, such as popular search engines and voice assistants, and does not cover generative AI. The EU AI Act clearly stipulates which obligations fall on providers of AI systems, thus helping businesses purchasing AI understand the terms and conditions that will need to be amended to reflect those obligations.

Many Southeast Asian countries are not yet heavily involved in producing AI-based systems and have either launched or are in the process of developing national AI strategies and governance frameworks. The ASEAN AI Guide sets out best practices for developing and using AI systems and for aligning policy coordination. As part of the digital transformation in the region, ASEAN adopted the ASEAN Digital Masterplan 2025 in 2021, which envisions ASEAN as a leading digital community and economic bloc. The plan emphasizes the importance of digital transformation and, while not entirely focused on AI, highlights the need for harmonized digital policies and regulatory frameworks.

Today, Singapore is a leader in AI regulation within the region. In January 2019, the Infocomm Media Development Authority developed the “Model AI Governance Framework” that provides detailed guidance on responsible AI deployment.

In Thailand, the “Artificial Intelligence Ethics Guidelines” was issued in October 2019.  These guidelines set out principles for the ethical use and deployment of AI across various sectors, ensuring that AI technologies are developed and implemented responsibly and ethically within the country. 

In Malaysia, the “National AI Framework” was launched in 2022 by the Government of Malaysia to support the ethical and responsible use of AI technologies. The framework, part of Malaysia’s National Artificial Intelligence Roadmap for 2021-2025, underscores Malaysia’s strategic commitment to ethical AI, aiming to harness AI’s transformative potential while ensuring fairness and transparency in its deployment.

At the time of writing this review, we are unaware of any other Southeast Asian country that has issued a similar framework or guidelines, except for Vietnam, which is in the process of developing its AI regulations to address the ethical use of AI. These efforts reflect a growing recognition of the need to balance the economic benefits of AI with its ethical implications, promoting responsible AI use across the region.


What should the region do?

The EU AI Act and the ASEAN AI Guide share several similarities in their approach to regulating and guiding the ethical use of AI. Both frameworks emphasize responsible AI deployment and aim to ensure that AI technologies are aligned with societal values and ethical principles.

Both the EU and ASEAN have their own versions of AI regulation, with the EU’s being more stringent and having far-reaching implications. It is important for organizations to understand the impact the Act will have on their business, especially those that sell, import, distribute or plan for AI products to be used within the EU, as it will be the standard to follow and conform to.

As the region continues to evolve and the need for AI grows, ASEAN will need a more comprehensive regulatory framework to complement the Guide. Here, ASEAN can look to the EU AI Act to align its strategies and inform its AI regulations, and to assess whether it is a model worth emulating. Much will also depend on how the technology evolves and the impact the EU AI Act has.

Even though the full provisions of the EU AI Act have yet to come into force, organizations would do well to develop an AI compliance strategy alongside a business-focused AI strategy. As countries and territories across the region are at different stages of digital growth, both the EU AI Act and the ASEAN AI Guide can offer guidance on how to navigate the ever-evolving AI landscape. While the ASEAN AI Guide takes a non-binding approach to AI development, the EU AI Act carries financial consequences for non-compliance. Organizations across the region should ensure that their AI trajectory can navigate the challenges and harness the opportunities presented by both sets of rules.


Our people

Hanim Hamzah

Asia Pacific Regional Leader for Legal Services, KPMG Law

KPMG in Singapore

