The field of Artificial Intelligence (AI) promises to revolutionize our world, but with every technological leap comes a responsibility to navigate the complexities it unveils. Two areas of particular concern in the context of AI utilization are content liability and Intellectual Property (IP).

The Case of AI and the Global Landscape:

The incident involving Vietnamese authorities and AI underscores a global dilemma: the potential for AI to be exploited to generate divisive content, spread misinformation, and erode trust in institutions. It also highlights the need for international cooperation in establishing ethical guidelines for AI advancement that transcend geographical boundaries.

Regions worldwide are moving swiftly towards standardized regulations for AI, as every nation acknowledges the potential benefits that technological advancement and AI development can bring: increased efficiency and productivity, enhanced decision-making, automation, and in particular the ability to generate content based on training data.

However, every coin has two sides. AI also exposes both developers and users to significant risks, including data breaches and intellectual property infringement, as well as the political risks mentioned above. Regions around the world have therefore moved quickly to introduce regulations such as the EU's proposed AI Act, Canada's Directive on Automated Decision-Making, and China's New Generation AI Development Plan, among others. These regulations underline the importance of accountability, transparency, and fairness in AI systems.

Nonetheless, no comprehensive legal framework specifically targeting IP infringement by artificial intelligence has yet been issued. The legal framework for AI and IP infringement is currently cobbled together from existing intellectual property laws, such as copyright and patent law, and from regulations addressing the immediate concern of harmful content.

Vietnam is no exception. We are, however, looking forward to the results of Decision No. 127/QD-TTg issued on 26 January 2021, which sets out a national strategy for the advancement of AI, underlines Vietnam's attention to this field, and promises the development of a legal framework for AI governance in the near future.

Legal Framework Regulating Liability for AI-Generated Content on Language Processing Platforms:

For the time being, Vietnam’s rules governing AI are adapted from existing technology-related regulations. The Cybersecurity Law 2018 requires platform owners to actively prevent, detect, and remove harmful content upon request from cybersecurity authorities. This encompasses content that attacks the government, incites violence, spreads misinformation, harms the economy, or causes public panic. The Cybersecurity Law 2018 exemplifies the growing trend of regulations aimed at controlling AI-generated content on the Internet. Because these obligations apply broadly to any administrator of an information system providing content in Vietnamese cyberspace, AI-powered content generation platforms are exposed to violations arising from restricted or prohibited content generated on their platforms, even though algorithm-based content generation is largely automated. Since the law does not elaborate on an acceptable safe harbour, institutions developing AI tools may consider adopting the following proactive measures:

  • Technology-based safeguards: Implement filtering and detection tools to identify and remove harmful content, including politically sensitive keywords combined with negative, vulgar, or misleading descriptions (a minimal filtering sketch follows this list).
  • Comprehensive Terms of Use and Internal Policies: Develop clear and detailed clauses outlining prohibited content, sensitive political keywords, content inciting violence, spreading misinformation, or defaming individuals or groups. Regularly review and update these policies to address emerging threats.
  • Transparency and Accountability: Clearly communicate your organization's content moderation policies and enforcement procedures to all users.
  • User Education Initiatives: Implement educational programs to promote responsible online behavior and equip users with critical thinking skills to identify and counter misinformation, regardless of their native language.
  • Limiting Content Sharing: Consider implementing restrictions on individuals sharing AI-generated content on social media, public websites, or other platforms to mitigate potential risks.
  • Point of Contact and Response Mechanism: Establish a designated point of contact and a clear process for promptly responding to requests from cybersecurity authorities regarding the removal of harmful content.
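
To illustrate the first measure above, the sketch below shows one way an operator might screen generated text against a maintained blocklist before it is returned to a user. The patterns, data structures, and sample text are hypothetical placeholders chosen for brevity, not a prescribed or legally sufficient implementation; production safeguards typically combine curated term lists, trained classifiers, and human review.

```python
# Illustrative sketch only: a minimal keyword-based screen for AI-generated text
# before it is shown to users. The patterns below are hypothetical placeholders;
# a production safeguard would combine curated term lists maintained against the
# Cybersecurity Law 2018 categories, trained classifiers, and human review.
import re
from dataclasses import dataclass, field


@dataclass
class ScreeningResult:
    allowed: bool
    matched_patterns: list = field(default_factory=list)


# Hypothetical examples of restricted patterns an operator might maintain.
BLOCKLIST = [
    r"\bincit\w*\s+violence\b",
    r"\bfabricated\s+news\b",
]


def screen_generated_text(text: str) -> ScreeningResult:
    """Flag AI-generated text that matches any restricted pattern."""
    matches = [p for p in BLOCKLIST if re.search(p, text, flags=re.IGNORECASE)]
    return ScreeningResult(allowed=not matches, matched_patterns=matches)


if __name__ == "__main__":
    draft = "This post incites violence against local institutions."
    result = screen_generated_text(draft)
    print(result.allowed, result.matched_patterns)
```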

Intellectual Property Rights Infringement:

Relating to the matter of Intellectual Property, the question of liability for IP infringement in the context of AI-generated content remains contentious, particularly for platforms that allow user-uploaded documents. While the current focus lies on takedown procedures and content moderation for platforms such as social media, the future of IP law regarding AI-generated works remains uncertain. Several strategies can nevertheless be implemented to mitigate potential copyright risks and promote the responsible use of AI within organizations:

  • Utilize authorized data: Train your AI models exclusively on datasets for which you have legal rights to use. This can involve acquiring datasets with appropriate licenses or leveraging open-source datasets with clear licensing terms.
  • Content filtering: Integrate robust content filtering mechanisms into your AI platforms. Such filters can help identify and prevent the generation or dissemination of copyrighted material (see the illustrative similarity check after this list).
  • User education and clear guidelines: Clearly outline in your user terms and conditions (T&Cs) and user guidelines that any AI-generated content, including potential exact copies or content that closely resembles existing copyrighted materials, is intended solely for study and teaching within your organization. Users should also be advised to refrain from reproducing such content for public publications without prior authorization to avoid potential copyright infringement.
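
To illustrate the content filtering strategy above, the sketch below flags generated output that closely reproduces text from a reference set of protected works an organization has indexed. The n-gram comparison, threshold, and sample data are assumptions chosen for brevity, not a complete or legally sufficient check; it simply shows where such a gate could sit before content is released or shared.

```python
# Illustrative sketch only: a simple n-gram overlap check that flags AI output
# closely resembling known reference texts (e.g. licensed or copyrighted works
# the organization has indexed). The n-gram size and threshold are hypothetical;
# real deployments often rely on fingerprinting or embedding-based similarity
# combined with human review before any content is published.
def ngrams(text: str, n: int = 5) -> set:
    """Return the set of lower-cased word n-grams in the text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}


def overlap_ratio(candidate: str, reference: str, n: int = 5) -> float:
    """Share of the candidate's n-grams that also appear in the reference."""
    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    return len(cand & ref) / len(cand) if cand else 0.0


def flag_if_too_similar(candidate: str, references: list, threshold: float = 0.3) -> bool:
    """True when the candidate shares too many n-grams with any reference text."""
    return any(overlap_ratio(candidate, ref) >= threshold for ref in references)


if __name__ == "__main__":
    protected = ["the quick brown fox jumps over the lazy dog near the riverbank"]
    draft = "The quick brown fox jumps over the lazy dog near the riverbank today."
    print(flag_if_too_similar(draft, protected))  # True: likely a near-copy
```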

AI's remarkable capability also presents a substantial risk of IP infringement for both developers and users. Many jurisdictions hold users accountable, although debate continues over whether users can reasonably be expected to anticipate such risks. At the same time, AI's learning mechanisms can assist in detecting and preventing infringement, and users are encouraged to obtain permission before using protected content.

In Vietnam, in the absence of a legal framework specifically tailored to Artificial Intelligence, organizations must comply with the existing IP laws. The current IP Law in Vietnam does not specifically address AI-generated content that infringes IP rights. It does prohibit various unauthorized actions, including the creation of derivative works, public performance, copying, distribution, and rental of works without proper authorization, as well as the broadcasting of works or making them available to the public through any channel. However, these restrictions appear designed to be enforced against individuals or organizations directly engaging in such activities, rather than against AI algorithms or their creators. How laws and practice will evolve to adapt to the AI landscape remains to be seen.

Conclusion

Technological advancement is rapidly transforming Vietnam, but a comprehensive legal framework specifically for AI is still under development. While this presents exciting opportunities, it also requires careful navigation. Organizations utilizing AI in Vietnam should be aware that the absence of an AI-specific legal framework doesn't exempt them from existing regulations.

Breaches of other established regulations related to cybersecurity, IP, and other relevant areas can still occur. This highlights the importance of adopting responsible practices and seeking legal guidance to ensure compliance with existing regulations and anticipate future developments in the AI legal landscape.

KPMG is dedicated to supporting Vietnam's responsible AI adoption. We understand the unique legal and cultural context of the Vietnamese market and offer tailored solutions for local organizations. Our team of Vietnamese and international lawyers can assist you with any advisory requests regarding the governance or deployment of AI in Vietnam.

This alert is for general information only and is not a substitute for legal advice.

If you have any questions or require any additional information, please contact Nguyen Thi Nhat Nguyet (Nina) or Tran Bao Trung or the usual KPMG contact that you deal with at KPMG Law in Vietnam.


Nguyen Thi Nhat Nguyet

Director
KPMG Law in Vietnam

Tran Bao Trung

Associate Director
KPMG Law in Vietnam