Accurate AI responses demand appropriate queries
So why is ChatGPT having these issues – and do they mean we cannot rely on its capabilities? The answers lie in understanding its limitations.
First and foremost, it’s important to note that ChatGPT is an example of Artificial Narrow Intelligence (ANI), not Artificial General Intelligence (AGI). ANI systems are very good at performing the one type of task for which they have been trained, but they cannot handle tasks outside that training, however simple. For example, an ANI system designed to generate images will likely not be able to solve a simple mathematical question such as “What is five plus seven?”6
Secondly, ChatGPT is a generative AI model – designed to generate new content from a given set of inputs and rules. Its primary application is generating human-like responses, but it lacks human-like reasoning skills. In ChatGPT’s own words: “I am designed to be able to generate human-like text by predicting the next word in a sequence based on the context of the words that come before it.”
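To make that description concrete, here is a minimal sketch of next-word prediction, assuming a toy bigram model built from word-frequency counts and greedy selection (illustrative assumptions only; ChatGPT itself uses a large neural network over tokens, not frequency tables):

```python
# Minimal sketch of next-word prediction with a toy bigram model.
# Purely illustrative: real models like ChatGPT are neural networks,
# not word-frequency tables.
from collections import Counter, defaultdict

corpus = "the model predicts the next word based on the words before it".split()

# Count how often each word follows each other word in the corpus.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else "<unknown>"

# Generate a short continuation greedily, one word at a time.
word = "the"
for _ in range(5):
    print(word, end=" ")
    word = predict_next(word)
```

The point of the sketch is that the model only ever chooses a plausible next word given what came before; at no step does it reason about whether the resulting sentence is true.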
Therefore, for ChatGPT to be trusted, it is each user’s responsibility to apply its AI capabilities to a suitable use case. Equally important, developers should use reliable data sets to train the AI model and apply relevant bias and content filters. In classical computing, the concept of GIGO – garbage in, garbage out – is pervasive and holds true. With AI, it becomes GISGO – garbage in, super garbage out – making it critical that developers train the AI model on reliable data.
The good news is that ChatGPT is quite aware of its limitations and can respond to users accordingly. ChatGPT also combines supervised and reinforcement learning, which provides the benefits of faster learning through a reward system and the ability to learn from human input.
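As a loose illustration of that reward idea, here is a minimal sketch, assuming toy candidate responses, hypothetical human ratings, and a simple running-score update standing in for actual model fine-tuning:

```python
# Minimal sketch of reward-based learning from human feedback.
# Illustrative only: ChatGPT's actual training fine-tunes a large neural
# network; here a running score per candidate response stands in for
# the model's learned preferences.
import random

candidates = {
    "curt answer": 0.0,
    "detailed answer": 0.0,
    "polite, detailed answer": 0.0,
}
# Hypothetical human ratings: +1 for a helpful response, -1 otherwise.
human_reward = {"curt answer": -1, "detailed answer": 1, "polite, detailed answer": 1}

learning_rate = 0.1
for _ in range(200):
    # Pick a response to try; a real system balances exploring and exploiting.
    response = random.choice(list(candidates))
    # Nudge the stored score toward the human-provided reward.
    candidates[response] += learning_rate * (human_reward[response] - candidates[response])

# After enough feedback, the best-scoring response reflects human preferences.
print(max(candidates, key=candidates.get))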
Establish guardrails to maximize the benefits of AI
As organizations explore use cases for powerful new AI solutions like ChatGPT, it is crucial that cyber and risk teams set guardrails for secure implementation. The following non-exhaustive list offers initial steps to consider as AI continues to evolve:
- Set expectations for how ChatGPT and similar solutions should be used in an enterprise context. Develop acceptable use policies, define a list of all approved solutions, use cases and data that staff can rely on, and require that checks be established to validate the accuracy of responses.
- Establish internal processes to review the implications and evolution of regulations regarding the use of cognitive automation solutions, particularly the management of intellectual property, personal data, and inclusion and diversity where appropriate.
- Educate your people on the benefits and risks of using these AI solutions, as well as how to get the most out of them, including suitable use cases and the importance of training the model with reliable datasets.
- Implement technical cyber controls, paying special attention to testing code for operational resilience and scanning for malicious payloads. Other controls include, but are not limited to:
- Multifactor authentication, with access enabled only for authorized users;
- Application of data loss-prevention solutions (see the sketch after this list);
- Processes to ensure all code produced by the tool undergoes standard reviews and cannot be directly copied into production environments;
- Configuration of web filtering to provide alerts when staff access non-approved solutions.
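As one concrete illustration of the data loss-prevention control above, here is a minimal sketch of a hypothetical pre-submission check that screens staff prompts for sensitive patterns before they reach an external AI tool. The pattern names, regular expressions, and policy outcome are illustrative assumptions, not a product configuration:

```python
# Minimal sketch of a DLP-style pre-submission check for prompts sent
# to an external AI tool. Patterns and policy are illustrative
# assumptions, not a complete or production-grade control.
import re

SENSITIVE_PATTERNS = {
    "credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Summarize this: contact jane.doe@example.com, card 4111 1111 1111 1111"
findings = screen_prompt(prompt)
if findings:
    # In practice this would raise an alert or route the prompt for review.
    print(f"Blocked: prompt contains {', '.join(findings)}")
else:
    print("Prompt allowed")
```

In practice, a check like this would sit alongside, not replace, an enterprise DLP solution, with patterns tuned to the organization’s own data classifications.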