As featured on BusinessMirror: Building trust in AI is a shared responsibility
Breakthroughs like ChatGPT are taking off, but accuracy is a two-way street.
Can a ground-breaking AI chatbot gain instant celebrity status? It’s an intriguing question amid the soaring popularity of a new chatbot, ChatGPT, from artificial intelligence research company OpenAI.
Designed for natural-language processing, such as text generation and language translation, ChatGPT is useful for a range of applications, like creating customer-service bots, drafting social media posts, conducting research, responding to online queries, and even writing basic code or poetry. ChatGPT’s ability to simulate human conversation has sparked a remarkable wave of attention and use since its launch.
Consider the numbers for some of the hottest online startups of recent times. Netflix needed about three years to hit the revered million-user milestone. Facebook needed just 10 months and Instagram a mere 2.5 months. ChatGPT? Built on one of the largest and most advanced language models currently available, it reached the million-user mark in just five days.
Impressive indeed. But as the numbers climb skyward, ChatGPT’s explosive popularity raises another interesting question that has sparked wide online debate: can ChatGPT be trusted? The answer isn’t simple and depends on the context and application.
The AI industry in the Philippines is still in its early stages but is already showing promising growth and potential. Several start-ups and established companies have begun penetrating the market with AI-enabled products and services that promise better data analytics and processing.
However, this technological innovation is not free of issues and concerns. KPMG in the Philippines Technology Consulting Head Jallain Marcel S. Manrique shared that “AI services like ChatGPT, if not backed by credible and trusted data sources and strong data protection features, could pose a great risk to companies through inaccuracies and the potential of breaches and cyberattacks. This is a major consideration, especially for businesses that handle confidential financial information and customer data.”
Since ChatGPT is still in development, thorough research must be done to examine the accuracy, safety, and security of the AI service in handling various kinds of data. In the end, the pros can outweigh the cons once the needed walls of protection are well planned and rigorously laid out.
Jallain Marcel S. Manrique
Technology Consulting Head
KPMG in the Philippines
What could go wrong?
There’s no doubt that ChatGPT provides immense opportunities for businesses and individuals, and it has enormous potential to expand its capabilities. While some excited users seem to anticipate superpowers emerging in ChatGPT, we’re also witnessing disappointment in online user dialogue. The AI platform is trained on publicly available information, and here are some examples of where ChatGPT's responses seem inappropriate or clearly off the mark:
- Unintended bias: Some ChatGPT responses may exhibit bias. Examples shared on Twitter show that users have managed to bypass ChatGPT’s filters and elicit biased responses. Because ChatGPT is trained on publicly available information, some of its responses can carry unintended bias.
- Incorrect information: The internet is full of misinformation and fake news, and controversial claims often go viral faster than factual ones. As a result, ChatGPT can return incorrect results if its underlying knowledge base is flawed.
- Logical errors: ChatGPT has failed to correctly answer simple logical questions and riddles. For example, even a fifth grader could likely answer this question: "If my sister was half my age when I was six years old, how old would she be when I am 70?" Surprisingly, ChatGPT has not been able to deduce the correct answer (see the worked example after this list).
- Dated information: ChatGPT sometimes provides information that newer facts have disproved or superseded. The current model is trained on information available only up to 2021, and although it often highlights this limitation, it can still present outdated information as current.
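For reference, the riddle above reduces to simple arithmetic: the age gap between siblings never changes. A minimal Python sketch of the reasoning:

```python
# The riddle is just arithmetic: the age gap between siblings is fixed.
my_age_then = 6
sister_age_then = my_age_then / 2        # "half my age" means 3 years old
age_gap = my_age_then - sister_age_then  # the gap stays 3 years forever

my_age_later = 70
sister_age_later = my_age_later - age_gap
print(int(sister_age_later))  # 67, the answer ChatGPT struggled to reach
```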
Accurate AI responses demand appropriate queries
So, why is ChatGPT having these issues, and does that mean we can’t rely on its capabilities? The answers lie in knowing its limitations.
First and foremost, it’s important to note that ChatGPT is an example of Artificial Narrow Intelligence (ANI), not Artificial General Intelligence (AGI). ANI systems are very good at performing the one type of task for which they have been trained, but they are not suitable for tasks on which they have not been trained, however simple. For example, an ANI system designed to generate images will likely not be able to solve a simple mathematical question such as “What is five plus seven?”
Secondly, ChatGPT is a generative AI model – designed to generate new content based on a clear set of inputs and rules. Its primary application is to generate human-like responses. However, ChatGPT lacks human-like reasoning skills. In ChatGPT’s own words: “I am designed to be able to generate human-like text by predicting the next word in a sequence based on the context of the words that come before it.”
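To make that idea concrete, here is a deliberately tiny, hypothetical sketch of next-word prediction using a bigram model; it is vastly simpler than ChatGPT's actual architecture, and the corpus and function names are invented for illustration:

```python
from collections import Counter, defaultdict

# A toy bigram model: count which word follows which in a small corpus,
# then predict the most frequent follower. Same basic idea as "predict
# the next word from context", at a microscopic scale.
corpus = "the model predicts the next word given the words before it".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1  # tally each observed word pair

def predict_next(word: str) -> str:
    """Return the word most often observed after `word` in the corpus."""
    followers = bigrams.get(word)
    if not followers:
        return "<unknown>"
    return followers.most_common(1)[0][0]

print(predict_next("the"))  # a word seen after "the", e.g. "model"
print(predict_next("cat"))  # "<unknown>": never seen in the corpus
```

The sketch only continues statistical patterns; it does not reason about meaning, which is exactly the limitation described above.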
Therefore, for ChatGPT to be considered trustworthy, it's the responsibility of each user to apply its AI capabilities to a suitable use case. Equally important, developers should use reliable data sets to train the AI model and apply relevant bias and content filters. In classical computing, the concept of GIGO (garbage in, garbage out) is pervasive and holds true. With AI, it becomes GISGO (garbage in, super garbage out), making it critical that developers use reliable data to train the model.
The good news is that ChatGPT is quite aware of its limitations and can respond to users accordingly. Also, ChatGPT combines supervised and reinforcement learning, which provides the benefits of faster learning through a reward system and the ability to learn from human input.
Establish guardrails to maximize the benefits of AI
As organizations explore use cases for powerful new AI solutions like ChatGPT, it’s crucial that cyber and risk teams set guardrails for secure implementation. The following non-exhaustive list offers initial steps to consider for getting ahead of the hype as AI continues to emerge:
- Set expectations for how ChatGPT and similar solutions should be used in an enterprise context. Develop acceptable-use policies; define the approved solutions, use cases, and data that staff can rely on; and require checks to validate the accuracy of responses.
- Establish internal processes to review the implications and evolution of regulations regarding the use of cognitive automation solutions, particularly the management of intellectual property, personal data, and inclusion and diversity where appropriate.
- Educate your people on the benefits and risks of using these AI solutions, as well as how to get the most out of them, including suitable use cases and the importance of training the model with reliable datasets.
- Implement technical cyber controls, paying special attention to testing code for operational resilience and scanning for malicious payloads. Other controls include, but are not limited to:
- Multifactor authentication, with access enabled only for authorized users;
- Application of data loss-prevention solutions (a brief illustrative sketch follows this list);
- Processes to ensure all code produced by the tool undergoes standard reviews and cannot be directly copied into production environments;
- Configuration of web filtering to provide alerts when staff accesses non-approved solutions.
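As one hedged illustration of the data loss-prevention idea above, a pre-submission filter might scan prompts for sensitive-looking patterns before they ever reach an external AI service. The patterns and function below are hypothetical and far simpler than any real DLP product:

```python
import re

# Hypothetical pre-submission check: block a prompt if it appears to
# contain sensitive data. Real DLP tooling is far more sophisticated.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{16}\b"),                      # possible card number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),     # email address
    re.compile(r"(?i)\b(password|api[_ ]?key)\b"),  # credential keywords
]

def is_safe_to_send(prompt: str) -> bool:
    """Return False if any sensitive-looking pattern appears in the prompt."""
    return not any(p.search(prompt) for p in SENSITIVE_PATTERNS)

print(is_safe_to_send("Summarize our Q3 sales trends"))         # True
print(is_safe_to_send("My password is hunter2, please check"))  # False
```

In practice, a check like this would sit alongside vendor DLP tooling and the other controls listed above rather than replace them.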
Generative AI is here for the long run – let’s ensure smart use
The benefits of ChatGPT are clear, and its introduction will accelerate the adoption of AI in business and society. But to maximize its benefits, accelerate the growth of your business, and maintain digital trust, responsible use of ChatGPT and other generative AI models is critical.
The information contained herein is of a general nature and is not intended to address the circumstances of any particular individual or entity. Although we endeavor to provide accurate and timely information, there can be no guarantee that such information is accurate as of the date it is received or that it will continue to be accurate in the future. No one should act upon such information without appropriate professional advice after a thorough examination of the particular situation.
The excerpt was taken from the KPMG Thought Leadership publication: https://advisory.kpmg.us/blog/2023/building-trust-ai-shared-responsibility.html
© 2023 R.G. Manabat & Co., a Philippine partnership and a member firm of the KPMG global organization of independent member firms affiliated with KPMG International Limited, a private English company limited by guarantee. All rights reserved.
For more information, you may reach out to KPMG in the Philippines Technology Consulting Head Jallain Marcel S. Manrique through ph-kpmgmla@kpmg.com, social media, or visit www.home.kpmg/ph.
This article is for general information purposes only and should not be considered professional advice to a specific issue or entity. The views and opinions expressed herein are those of the author and do not necessarily represent KPMG International or KPMG in the Philippines.