KPMG: Establish guardrails to maximize the benefits of AI
PETALING JAYA, 6 June 2023 – Generative artificial intelligence (AI) models such as ChatGPT and DALL-E have opened a world of opportunities, with the potential to transform businesses through automation at unprecedented speed. According to professional services firm KPMG, however, it will take time and human expertise to unlock their full potential in a way that is responsible, trustworthy, and safe.
KPMG’s global survey, involving over 17,000 people from 17 countries leading in AI activity and readiness, found that while 82 percent of respondents have heard of AI, three out of five (61 percent) are wary about trusting AI systems. Cybersecurity risks emerged as the dominant concern, raised by 84 percent of respondents.
Alvin Gan, Head of Technology Consulting at KPMG in Malaysia, said, “Generative AI models have positive uses across various business functions, from IT, human resources and operations to finance and more. For example, they can be used to contextualize ESG data and support reporting operations, helping organizations create statements that outline their ESG initiatives clearly. AI has also been used to review contracts to highlight potential conflict-of-interest clauses and to draft clauses and terms to hasten the contracting process.”
However, he cautioned that business leaders need to be aware that these expanded uses do not come risk-free. Many generative AI models are built to absorb user-inputted data to improve the underlying models over time. In turn, this data could be used to answer prompts from other users, potentially exposing the organization’s intellectual property or trade secrets to the public. This risk is especially acute when the organization’s employees are not trained in the proper use of AI applications, with an emphasis on confidentiality and quality assurance.
Data quality and ethics are key concerns, as questions remain about who truly owns content once it has been run through generative AI applications. The unrestricted use of generative AI applications can therefore expose the organization to intellectual property infringement and a host of broader fraud, brand, and reputational risks.
Alvin added, “When it comes to generative AI, users are not just using the solution but are also contributing to this technology’s self-learning evolution. The implications for Chief Information Security Officers are serious: they need to shift from problem solving to problem defining and create new approaches for teams to work alongside machines, enhancing business efficiencies in a way that doesn’t contravene applicable laws or professional standards.”
Riding the global trend, AI adoption in Malaysia is increasing, although it still lags behind neighboring countries, as reported in the Malaysia National Artificial Intelligence Roadmap 2021-2025 (AI Map) released by the Ministry of Science, Technology and Innovation (MOSTI). Of particular interest, a survey reported in the AI Map found that only 16 percent of organizations in Malaysia that have implemented AI have ensured their AI applications and systems are secured. Even fewer (10 percent) have developed a risk management and cybersecurity policy for AI.
“As concerns over security, privacy, data trust and ethics grow, it’s important to be vigilant and ensure your organization is using AI while upholding digital trust. To maximize the benefits of generative AI, organizations need to establish the necessary guardrails for its secure implementation and use, and this includes addressing the potential cybersecurity gap at the Board level,” Alvin concluded.
Trust in Artificial Intelligence: Global Insights 2023, KPMG in Australia and The University of Queensland, 22 February 2023
Malaysia National Artificial Intelligence Roadmap 2021-2025, Ministry of Science, Technology and Innovation, 2021
For media-related queries, please email firstname.lastname@example.org