"Don’t be afraid. Fear has a destructive impact on many aspects of life", a renowned Dutch cabaret artist once said. "Fear pervades every aspect of life, from work to relationships, from taking a leap of faith to speaking one's mind, or from moving to a new country to falling in love. Fear is what keeps us from being authentic – with ourselves and others."

Organizations are becoming increasingly adept at swiftly identifying cyberattacks, thanks to the rapid progress in cybersecurity laws and regulations, threat intelligence, processes, and technology. Yet, over the past year, we seem to have become even more fearful of such attacks, due to increased cyber espionage, the rise of disruptive and destructive cyberattacks, and information operations to collect tactical information and disseminate disinformation.

Last week, the RSA Conference 2023 took place in San Francisco. It is the world's largest cybersecurity conference, where experts in the field discuss the latest and anticipated developments in cybersecurity. Understandably, a hot topic this year was Artificial Intelligence (AI). Some speakers even opened with a script written by ChatGPT, or an introduction via a deepfake video. The innovations that ChatGPT and other AI applications bring are unstoppable, so many believe we should embrace them. Earlier this year, ChatGPT was estimated to have reached 100 million monthly active users within a period of three months, making it the fastest-growing application in history. By comparison, TikTok took nine months to reach 100 million users, and Instagram took more than two years.

It is understandable that there are concerns about these developments. But should we be afraid? Almost every innovation comes with negative side effects, such as job losses, social inequality, climate change, or infringements of privacy. When the internet first became popular, there were people who were afraid of it. That was mainly due to unfamiliarity: the internet was new, and many people did not know what to expect from it. They feared its unknown aspects and the possible dangers it brought with it. Although those concerns were understandable, over the years the internet has proven to be a valuable tool for communication, information exchange, and collaboration. People have learned how to protect themselves and their personal information online, and laws and regulations have been put in place to prevent abuse.

During the development and adoption of AI products, a 'storming and norming' phase will also take place, in which disagreements about the right application may arise. "'The Godfather of A.I.' Leaves Google and Warns of Danger Ahead," reads a New York Times headline. "For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm." Along with numerous other scientists, Hinton is now advocating for AI regulation. The European Commission is currently drafting the AI Act, a proposed law governing artificial intelligence and the first of its kind by a major regulator. Like the EU's General Data Protection Regulation (GDPR) of 2018, the AI Act has the potential to set a global standard for determining where AI has a positive or negative impact on people's lives. Developing a common vision, with shared values and norms, will lead to a more controlled application of AI.

Unfortunately, we see that malicious actors (hackers) have also discovered AI applications in this early stage of innovation. During the RSA Conference, Kevin Mandia (CEO of Mandiant) reported that the number of zero-day attacks has increased significantly in recent years: "Between 1998 and 2019, an average of ten zero-day attacks occurred each year. In 2020 that number more than doubled, and we saw 81 attacks in 2021 and 55 in 2022." What is going on?

One striking fact is that, in addition to zero-day attacks on the dominant platforms of Microsoft, Apple, and Google, there is an increase in zero-day attacks in another category: network security equipment, including firewalls and VPN devices. Nation-state actors lead the way in exploiting vulnerabilities at the perimeter of organizations' networks. In the case of CVE-2022-41328 (a path traversal vulnerability in Fortinet's FortiOS), it was concluded that the attacks were highly targeted, with some evidence that the threat actors preferred government networks. Kevin Mandia corroborates this: "The attackers employed highly advanced techniques, including those used for reverse-engineering parts of the operating system." The general assumption is that they may have used AI to achieve this.

"Vulnerabilities in network security equipment pose a serious risk. Attackers gain access to an organization through the security layer of a network where there is no Endpoint Detection & Response (EDR) service," says Mandia. "They've written code in places where we couldn't even conduct forensic investigations, because it sits on the protected part of a device. It's a pretty smart place for an attacker to be."

This type of attack is incredibly difficult to detect, let alone repel, and it's a frightening prospect. However, we can take targeted action, just as we would with other threats. Our approach to improving security operations begins with identifying the 'normal', precisely because once you have insight into normal behavior (i.e., traffic on your network and in the cloud), you can detect anomalous behavior (i.e., threats and attacks).
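To make that idea concrete, here is a minimal sketch of baselining 'normal' and flagging deviations, assuming nothing more than a single per-minute traffic counter. The numbers and the three-sigma threshold are purely illustrative, not a description of any actual tooling:

```python
# Minimal sketch: learn a baseline of "normal" outbound traffic, then
# flag observations that deviate strongly from it (a classic z-score test).
import statistics

# Per-minute outbound byte counts from the baseline period (synthetic values).
baseline = [48_000, 51_500, 49_900, 50_200, 47_800, 52_100, 50_600]

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(bytes_out: int, threshold: float = 3.0) -> bool:
    """Flag observations more than `threshold` standard deviations
    from the learned baseline."""
    return abs(bytes_out - mean) / stdev > threshold

print(is_anomalous(50_900))     # False: within the normal range
print(is_anomalous(4_800_000))  # True: an exfiltration-sized spike
```

In practice the baseline spans many signals (NetFlow, DNS, cloud audit logs) and is continuously re-learned, but the principle is the same: know your normal first.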

Is there a need to fear AI in the context of cybersecurity? The answer is no. Instead, AI offers us the opportunity to take control ourselves, detecting attackers more quickly and effectively through anomalous behavior in the abundance of security events. Traditional detection methods, such as signature-based detection or endpoint detection with EDR, are no longer sufficient on their own.
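As a rough illustration of what that could look like, the sketch below ranks a stream of events by anomaly score so that analysts see the most suspicious activity first. The event features and the use of scikit-learn's IsolationForest are assumptions made for this example, not any specific product's method:

```python
# Sketch: triaging a flood of security events by anomaly score so analysts
# handle the most suspicious ones first. All event features are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Baseline behavior per account: [logins_per_hour, mb_sent_out, failed_auths]
baseline_events = np.column_stack([
    rng.poisson(5, 500),
    rng.normal(20, 5, 500),
    rng.poisson(1, 500),
])
model = IsolationForest(random_state=7).fit(baseline_events)

# Today's events: two routine ones, one brute-force-plus-exfiltration pattern.
todays_events = np.array([
    [4, 22, 0],
    [6, 19, 2],
    [120, 900, 40],
])

# score_samples: lower means more anomalous, so rank ascending.
scores = model.score_samples(todays_events)
for rank, idx in enumerate(np.argsort(scores), start=1):
    print(f"priority {rank}: event {idx} (score {scores[idx]:.3f})")
```

The point is not the particular model but the workflow: rather than matching known signatures, the system learns what is normal and surfaces the sharpest deviations for human attention.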

To make security future-proof, it's important to focus on the cloud, endpoints, and the network, and to use AI in doing so. The use of AI in security helps our analysts identify threats faster amid a large number of possible security events. But don't forget to also pay attention to changing processes, skill sets, and governance models. Security professionals worldwide are aware of the need for these adjustments, as was evident at the RSA Conference 2023. A reassuring thought indeed.
