We have been hearing a lot of late about the new security risks created by AI and GenAI. That is quite understandable, but while the technology certainly comes with risks, it doesn’t in itself constitute a new security risk. It simply alters existing risks, and the fundamentals of how we approach those risks haven’t really changed. As with so much else in the cybersecurity realm, humans remain the best firewalls, say Dani Michaux and Jackie Hennessy, KPMG.

When considering AI security, the first step is to understand what AI is, where and how it is being used in the organisation, and for what purpose. If we can’t answer these questions, we can’t begin to secure it.

Unfortunately, as things stand there is no common understanding or definition of what AI is. That does make it difficult to put the necessary controls and governance in place.

Added to these gaps in understanding is the difficulty in defining exactly what an AI data breach is.

That brings us back to the first principles of security. Guardrails proportionate to the technology and its use must be put in place. And that begins with purpose.

The nature of the security measures will change according to what the AI is being used for. A useful analogy is a car. Not all cars are the same. Sports cars are very different to family saloons, and racing drivers are not the same as everyday drivers. The purpose changes the nature of the vehicle, and that dictates the safety controls and measures to be applied.

Third-party use

AI is the same. It is an amazingly powerful technology, but the security risks it presents and the measures and controls required to address them will be dependent on the purpose to which it is being put.

For example, Microsoft Copilot is already part of Bing. Does that mean a Bing user in an organisation should be regarded as an AI user? Does it mean that new policies and controls have to be put in place for that use? Probably not. After all, the AI in that instance is only being used to augment an existing tool.

Similarly, AI could be embedded in other third-party products or internal tools used for specific, limited applications. Furthermore, third-party suppliers could be using AI in the services they provide to the organisation.

In these circumstances, organisations can struggle to establish whether AI is in use and where it is being used. In many cases, the AI users just have to hope it has been secured.

"Constant vigilance is required to detect possible but improbable outputs and to establish if they are hallucinations. "

The importance of guardrails

Moving beyond mere hope, it is again a question of purpose. What outputs is the organisation hoping to get from AI? How can it ensure that AI usage, in all circumstances, is responsible? How can it restrict access to datasets to only those that are strictly necessary for the purpose involved?
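To make that last point concrete, one simple pattern is an explicit, deny-by-default mapping from each approved AI use case to the datasets it genuinely needs. The sketch below is purely illustrative; the use-case and dataset names are invented and it is not drawn from any KPMG framework or specific product.

```python
# Purely illustrative: a deny-by-default mapping of approved AI use cases to
# the minimum datasets each may read. All names here are invented examples.

ALLOWED_DATASETS = {
    "customer_support_summaries": {"support_tickets"},
    "fraud_scoring": {"transactions", "device_fingerprints"},
}

def check_access(use_case: str, dataset: str) -> bool:
    """Grant access only if the dataset is explicitly mapped to the use case."""
    return dataset in ALLOWED_DATASETS.get(use_case, set())

# The fraud model may read transactions, but not customer support tickets.
assert check_access("fraud_scoring", "transactions")
assert not check_access("fraud_scoring", "support_tickets")
```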

Data quality is also of key importance. Data can be accidentally or deliberately poisoned, and this can corrupt the outputs from the AI. Guardrails are needed for how users interact with the systems and tools and for how they access and input data.

The potential damage caused by corrupted data increases as organisations make more and more decisions using AI with no human oversight. There are ways of dealing with this and of reducing the risks posed. The first is good data governance: bad data will produce bad decisions, and the importance of robust data cleansing processes cannot be emphasised enough. The other is constant monitoring of the outputs to detect anomalous behaviours and results.
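As a minimal illustration of those two safeguards, the sketch below pairs a basic record-level quality check with a simple statistical monitor that flags outputs sitting far outside a recent baseline. The field names, thresholds and scores are assumptions made for the example; real-world cleansing and anomaly detection would be considerably more sophisticated.

```python
# Purely illustrative: a basic data-quality gate plus a statistical monitor
# that flags outputs far outside a recent baseline for human review.
# Field names, thresholds and scores are invented for the example.

from statistics import mean, stdev

REQUIRED_FIELDS = {"customer_id", "amount", "timestamp"}

def is_valid_record(record: dict) -> bool:
    """Reject records with missing fields or implausible values before they
    ever reach the model."""
    if not REQUIRED_FIELDS.issubset(record):
        return False
    amount = record.get("amount")
    return isinstance(amount, (int, float)) and amount >= 0

def flag_anomalous_outputs(new_scores, baseline, z_threshold=3.0):
    """Return indices of new model outputs that sit far outside the baseline
    distribution, so a human can review them before any decision is acted on."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return []
    return [i for i, s in enumerate(new_scores) if abs(s - mu) > z_threshold * sigma]

# A score of 9.7 against a baseline clustered around 0.5 is flagged (index 1).
baseline = [0.52, 0.48, 0.50, 0.47, 0.51, 0.49]
print(flag_anomalous_outputs([0.50, 9.7], baseline))  # -> [1]
```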

Data protection regulations also come into play here. If the AI is to be used to drive changes in consumer behaviour, for example, this needs to be done responsibly and data usage must be in accordance with regulations.

The other key question is how to eliminate, or at least reduce, hallucinations. The first step is detection, and that is not an easy task. From a human perspective, vivid dreams are often easy to confuse with reality. Did I dream last night? How do I know it was a dream?

The same applies to AI. How do you ensure it doesn’t make things up? Constant vigilance is required to detect possible but improbable outputs and to establish if they are hallucinations. There is a lot more research to be done in this space.
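By way of illustration only, one crude form of that vigilance is to flag generated sentences that have little overlap with the source material they are supposed to be grounded in, so a human can review them. The function names and threshold below are hypothetical, and a simple token-overlap check is far weaker than the dedicated hallucination-detection research the text refers to.

```python
# Purely illustrative: flag generated sentences with little vocabulary overlap
# with the source text they are meant to be grounded in. A real hallucination
# detector would be far more sophisticated; names and threshold are invented.

import re

def _tokens(text: str) -> set:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def flag_unsupported_sentences(answer: str, source: str, min_overlap: float = 0.5) -> list:
    """Return sentences from the model's answer whose words overlap poorly
    with the source, as candidates for human review."""
    source_tokens = _tokens(source)
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        sent_tokens = _tokens(sentence)
        if not sent_tokens:
            continue
        overlap = len(sent_tokens & source_tokens) / len(sent_tokens)
        if overlap < min_overlap:
            flagged.append(sentence)
    return flagged

source = "The policy covers water damage and fire damage up to 50,000 euro."
answer = ("The policy covers fire damage up to 50,000 euro. "
          "It also covers earthquakes and volcanic eruptions.")
print(flag_unsupported_sentences(answer, source))  # flags the second sentence
```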

"GenAI is different to most of the AI systems in use in organisations up until now."

Understanding usage

When it comes to securing AI, we typically ask clients to co-create solutions with us. It is such a new space, no two organisations are the same, and no two will use AI in exactly the same way. KPMG has developed frameworks to guide organisations in the safe and responsible use of AI. These frameworks cover the people, processes, policies and fundamental governance principles required.

In essence, it is much the same as it was for the cloud. It involves looking at the principal uses of the technology, where and when it will be used, and what controls are required for its usage. The principles remain the same; it’s just the specifics that are different.

GenAI is different to most of the AI systems in use in organisations up until now, but, despite what many people think, it is not revolutionising the way we think about cybersecurity. It is probably causing our thinking to evolve, but that doesn’t mean we have to change our entire security posture.

One of the questions relates to what happens to the data input to GenAI models. We had the same conversations about the use of OneDrive on smartphones and in relation to WhatsApp. We resolved those questions, and we will resolve this one too. GenAI does not present a massive new cyber risk scenario.

The threat of ransomware

On the other hand, we are seeing criminals using GenAI to become more effective and faster. The ransomware threat is increasing and accelerating. This presents fundamental challenges for security teams and organisations, which now have to deal with ransomware attacks that can unfold 100 times faster than before. The risk is heightened, but it is not new.

Part of the solution is to deploy AI to bolster cyber defences, but the human element of the equation remains critically important. GenAI is also being used to create more convincing phishing emails and produce new ones at an extremely rapid pace. Again, it is not a new threat, but it is potentially more potent.

It is not just a technology play. Humans are the ones who are best able to detect phishing emails and prevent ransomware attacks. Ultimately, the key to reducing risk is to bring human critical thinking and scepticism to bear. There is no substitute for keeping humans in the loop.

Get in touch

At KPMG we understand the pressure business leaders are under to get it right on tech and AI.

To find out more about how KPMG perspectives and fresh thinking can help your business please contact Dani Michaux or Jackie Hennessy of our AI team. We’d be delighted to hear from you.
