As we move through 2024, sophisticated threats such as AI-powered phishing attacks, extortion-focused ransomware, deepfake technology, supply chain attacks, and IoT device exploitation are a cause for concern to organisations worldwide. As a cyber leader or board member, it is crucial to understand and address the risks associated with these attacks in order to safeguard business operations, protect sensitive data, and maintain customer trust.

In addition to this vast array of cyber threats, organisations face the growing challenge of threat actors who leverage easily accessible AI tools to enhance their malicious efforts. Staying ahead of these challenges requires not just vigilance, but also continuous innovation in organisations' cyber defence capabilities.

In this blog we will highlight the dual-edged nature of AI in cyber security. While we explore the potential of AI in fortifying cyber defences and enhancing threat detection and response capabilities, we also consider the ways adversaries are harnessing AI to orchestrate more sophisticated and stealthier attacks. We aim to provide a balanced view that not only promotes the advancements of AI-powered cyber security, but also sheds light on the emerging challenges posed by AI, especially in the hands of threat actors. In doing so, we weigh the advantages and risks of adopting AI from a CISO’s point of view and give the more technical reader an overview of AI capabilities used for both defence and offence.

AI in Incident Response and Security Observability

CISO’s Corner

Fulfilling the CISO’s role in the era of AI advancements requires courage and due diligence in exploring new frontiers. The central question is: how can organisations safely explore and adopt the benefits of innovative AI technology to strengthen cyber security, while managing the risks of the unknown?

Threat actors are already leveraging AI-driven tools to automate tasks like scanning networks for vulnerabilities, launching attacks, crafting convincing phishing emails, and developing more sophisticated malware. Using AI to generate malicious code diminishes the need for cyber and scripting expertise and requires little effort or cost; combined with automation, this makes launching large-scale attacks relatively easy. CISOs can therefore expect a rise in both the number and the complexity of attacks. Combined with the cyber security skills shortage, this stretches cyber teams in detecting and responding to these advanced threats.

Organizations have the opportunity to adopt AI in their cyber defence strategies, offering several advancements in monitoring, analysing, and responding to security threats. Some examples include:

  • Real-Time Data Processing: AI systems like IBM's QRadar EDR can process and analyze large volumes of data in real time, providing comprehensive and dynamic insights into security logs, network traffic, and user behavior patterns.
  • Automated Incident Reporting: Google's generative AI tools can automate the creation of incident response reports and generate incident summaries.
  • Behavioural Analytics and Anomaly Detection: AI can identify and learn normal user behavior and detect deviations that may indicate a security threat.
  • Predictive Threat Analysis: AI systems can use historical data to predict which vulnerabilities are likely to be exploited next.
  • AI-Powered Training Simulations: AI-driven tabletop exercises help prepare cyber teams by providing realistic training environments where they can practice responding to various attack vectors, improving their preparedness and response capabilities​
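The behavioural analytics and anomaly detection idea above can be illustrated with a minimal sketch: learn a statistical baseline of normal activity and flag large deviations. This is a deliberately simple z-score approach, not how any of the named products actually work; the download counts are invented for illustration.

```python
from statistics import mean, stdev

def detect_anomalies(baseline, observed, threshold=3.0):
    """Flag observations deviating more than `threshold` standard
    deviations from the learned baseline of normal behaviour."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return [x for x in observed if x != mu]
    return [x for x in observed if abs(x - mu) / sigma > threshold]

# Baseline: a user's typical number of file downloads per day.
baseline = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15]
# Today's figures include a spike that may indicate data exfiltration.
alerts = detect_anomalies(baseline, [14, 13, 250])  # flags only 250
```

Production systems replace the z-score with learned models over many behavioural features, but the principle, baseline first, deviations second, is the same.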

What is Security Observability?

Security observability is the ability to proactively monitor, analyse, and respond to the internal and external security threats facing an organization. It goes beyond traditional monitoring, incorporating advanced analytics to provide deep insights and predictive capabilities that inform security operations and incident responses.

Besides early detection, AI-driven automation in security observability can significantly reduce the burden on cyber teams by filtering out false positives. Using data from previous false alarms enhances threat detection capabilities and reduces the time spent investigating non-issues. For example, IBM claims in its study that QRadar EDR’s Cyber Assistant, an AI-powered alert management system, has helped clients reduce the number of false positives that analysts would otherwise spend time on by 90% on average. This improves the accuracy of threat detection and allows cyber professionals to focus their efforts on high-impact threats.
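As a sketch of how historical triage data can suppress noisy alerts, the snippet below learns which alert signatures were overwhelmingly false positives in analyst-reviewed history and mutes future matches. The thresholds and the `(signature, was_false_positive)` record format are assumptions for illustration, not any vendor's actual mechanism.

```python
from collections import Counter

def build_fp_filter(history, fp_rate_cutoff=0.9, min_samples=20):
    """Learn which alert signatures were almost always false positives.
    `history` is a list of (signature, was_false_positive) pairs taken
    from analyst-triaged alerts."""
    totals, fps = Counter(), Counter()
    for signature, was_fp in history:
        totals[signature] += 1
        if was_fp:
            fps[signature] += 1
    return {
        sig for sig, n in totals.items()
        if n >= min_samples and fps[sig] / n >= fp_rate_cutoff
    }

def triage(alert_signatures, suppressed):
    """Split incoming alerts into ones worth an analyst's time and
    ones matching known-noisy signatures."""
    review = [a for a in alert_signatures if a not in suppressed]
    muted = [a for a in alert_signatures if a in suppressed]
    return review, muted
```

The key design point is that suppression decisions are derived from analyst verdicts, so the filter improves as the team keeps triaging.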

By deploying AI’s capabilities to automate incident response actions, such as isolating affected systems, blocking malicious IP addresses, and applying patches, threats can be contained before they spread and damage is minimized. Moreover, AI is becoming instrumental in correlating alerts from different systems into a single, high-priority incident, thereby streamlining response efforts. Detailed AI-generated information and recommendations (e.g. using Copilot for Security) help cyber teams make informed, rapid decisions during incident response. Cyber teams can prepare for this with AI-powered tabletop exercises that simulate cyberattack scenarios, enhancing organisational readiness.
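The containment actions mentioned above can be thought of as a small response playbook. The sketch below is purely illustrative: `block_ip` and `isolate_host` are hypothetical stand-ins for whatever firewall or EDR API your stack actually exposes, with in-memory sets standing in for real enforcement state.

```python
import ipaddress
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("responder")

def block_ip(ip: str, blocklist: set) -> None:
    """Validate the address, then add it to the (stand-in) firewall blocklist."""
    ipaddress.ip_address(ip)  # raises ValueError on a malformed address
    blocklist.add(ip)
    log.info("Blocked %s at the firewall", ip)

def isolate_host(host: str, isolated: set) -> None:
    """Record the host as network-isolated (stand-in for an EDR call)."""
    isolated.add(host)
    log.info("Isolated %s from the network", host)

def respond(alert: dict, blocklist: set, isolated: set) -> None:
    """Run the containment steps that apply to a confirmed alert."""
    if alert.get("malicious_ip"):
        block_ip(alert["malicious_ip"], blocklist)
    if alert.get("compromised_host"):
        isolate_host(alert["compromised_host"], isolated)

blocklist, isolated = set(), set()
respond({"malicious_ip": "203.0.113.7", "compromised_host": "ws-042"},
        blocklist, isolated)
```

In practice such playbooks also need approval gates and rollback paths, precisely because, as argued below, AI-driven actions should not be accepted uncritically.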

Altogether, with AI-driven tooling taking on part of the time-consuming, quick-win tasks, cyber teams can focus on the more challenging and complex threats.

AI is not the holy grail

Integrating AI into cyber security processes comes with new types of risk. These risks primarily concern the potential for AI to be manipulated, e.g. by exploiting a vulnerability, or to produce inaccurate results due to biased data inputs or incorrect assumptions, also called hallucinations. As AI algorithms become increasingly complex and autonomous, demonstrating the integrity of AI models is challenging; even the developers of these models do not always fully understand their complex behaviour. The lack of transparency and explainability in how these models arrive at their conclusions makes it difficult to fully trust them and ensure accountability.

You still need your SOC

It is imperative to recognize that AI responses should not be accepted uncritically. Ensuring the integrity of AI systems is paramount to prevent introducing new vulnerabilities and flaws into the security framework. Strict checks and validation of AI-driven insights and outputs are essential to maintaining accuracy and reliability, applying the trust-but-verify principle that most cyber security professionals are so familiar with.

In the near future, many cyber security tasks will be handled by AI, making proficiency in AI tools and systems a fundamental skill, similar to being skilled in using search engines. Cyber security professionals must not just be skilled at leveraging AI, but also possess the ability to verify the results and decisions generated by these systems. This entails a deep conceptual understanding of AI to critically assess and challenge its methodologies, ensuring that reliance on AI does not overshadow the necessity for human judgment and common sense. So while CISOs are encouraged to proactively familiarize their cyber workforce with AI and its application in their work, this should be done with due care and continuous education.

In this evolving landscape, cyber security analysts will continue to play a crucial role. Rather than reducing SOC teams, organizations must invest in developing new skills within these teams to effectively collaborate with AI systems. This approach will enable SOCs to leverage AI advancements while retaining the indispensable human element, thereby ensuring secure and resilient organisations.

Although undeniably powerful, AI's effectiveness still depends on human involvement.

Deep Dive into AI Capabilities

Tech Corner

AI technologies are revolutionizing how we understand and respond to cyber security incidents. Below, we explore some advanced AI capabilities that are shaping the landscape:

  • Prompt Engineering: This technique involves crafting specific inputs to AI systems to produce desired outputs. In cyber security, prompt engineering can be pivotal. For example, it can be used to create prompts that help AI identify and categorize threats more effectively, improving response times and accuracy in threat detection.
  • Automated Malware Detection: AI models, trained on vast datasets, are capable of distinguishing between benign and malicious software. This capability is essential for identifying new threats as they emerge, allowing for real-time updates and adaptive defence mechanisms that keep pace with evolving malware strategies.
  • Containment Automation: AI can automatically block IPs or quarantine devices that are compromised. This involves integrating automated patch management to swiftly close vulnerabilities, reducing the window of opportunity for threat actors to exploit weaknesses in the system.
  • Pentest GPT: This can be used in penetration testing (pentesting) to simulate attacker behaviour. By generating various attack vectors, it can help in identifying potential vulnerabilities within a system that traditional methods might miss.
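To make the prompt engineering bullet concrete, here is a minimal sketch of a triage prompt template. The taxonomy, the "SOC triage assistant" framing, and the sample alert text are all invented for illustration; the point is only that constraining the model's task and output format is what makes its answers usable downstream.

```python
# Hypothetical fixed taxonomy the model must choose from.
TAXONOMY = ["phishing", "malware", "brute-force", "data-exfiltration", "benign"]

def build_triage_prompt(raw_alert: str) -> str:
    """Wrap a raw alert in a constrained classification prompt, so the
    model's answer maps cleanly onto a known category."""
    categories = ", ".join(TAXONOMY)
    return (
        "You are a SOC triage assistant.\n"
        f"Classify the alert below into exactly one of: {categories}.\n"
        "Answer with the category name only, then one sentence of rationale.\n\n"
        f"Alert:\n{raw_alert}"
    )

prompt = build_triage_prompt(
    "User j.doe clicked a link in an email from invoice@paypa1-secure.example"
)
```

The same pattern, explicit role, closed answer set, fixed output shape, is what lets AI-driven categorisation feed automated downstream actions instead of free-form text.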

Threat Actors vs. Defenders

In the continuous contest between threat actors and defenders, AI provides significant advantages to both sides. It is imperative for defenders to recognize these advantages and develop strategies to counteract them.

Defensive Strategies

To counter these advanced AI-driven attacks, defenders must adopt equally sophisticated AI solutions:

  • AI-Powered Threat Detection: Leveraging AI to enhance threat detection capabilities is crucial. AI models can analyse vast amounts of network data to identify unusual patterns indicative of an attack initiated by a threat actor.
  • Real-Time Response: Implementing AI for automated incident response can significantly reduce reaction times. AI can isolate affected systems, block malicious IPs, and initiate containment protocols autonomously.
  • Predictive defence: AI can be used for predictive analytics to foresee potential attack vectors. By analysing trends and patterns in cyber threats, AI can help organizations anticipate and mitigate risks before they materialize.
  • Continuous Learning and Adaptation: AI systems must be continuously trained with new data to adapt to emerging threats. This ongoing process ensures that defence mechanisms remain effective against the latest attack strategies.
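The predictive defence bullet above can be sketched as a simple prioritisation step: rank open vulnerabilities by how often their weakness class appeared in past incidents. The CWE identifiers and incident history below are made up, and real predictive models use far richer signals; this only illustrates the shape of the idea.

```python
from collections import Counter

def prioritise(open_vulns, exploit_history, top_n=3):
    """Rank open vulnerabilities by how frequently their weakness class
    (CWE) showed up in historical incidents."""
    freq = Counter(exploit_history)
    # Counter returns 0 for unseen classes, so unknown CWEs sort last.
    ranked = sorted(open_vulns, key=lambda v: freq[v["cwe"]], reverse=True)
    return ranked[:top_n]

history = ["CWE-79", "CWE-89", "CWE-89", "CWE-502", "CWE-89"]
vulns = [
    {"id": "V-1", "cwe": "CWE-22"},
    {"id": "V-2", "cwe": "CWE-89"},  # SQL injection: most exploited above
    {"id": "V-3", "cwe": "CWE-79"},
]
worklist = prioritise(vulns, history)  # V-2 first, then V-3, then V-1
```

Patching effort then flows to the vulnerabilities most likely to be exploited next, which is the essence of predictive defence.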

Attacker Advantages

  • Speed and Efficiency: AI enables threat actors to automate and accelerate their operations. Machine learning algorithms can rapidly scan for vulnerabilities, launch attacks, and adapt strategies in real-time, making it challenging for defenders to keep pace.
  • Sophistication: AI can be used to develop more sophisticated attack methods. For instance, machine learning algorithms can craft phishing emails that are more convincing by analysing successful phishing tactics.
  • Scalability: AI allows threat actors to scale their operations, targeting multiple systems simultaneously with minimal human intervention. This capability can overwhelm traditional defence mechanisms.

In the relentless contest between threat actors and defenders, AI not only levels the playing field but also provides a strategic advantage, allowing defenders to predict, detect, and mitigate threats with unprecedented speed and accuracy.

Challenges in AI Adoption by Defenders

Despite the transformative potential of AI in enhancing cyber security defences, the adoption of these technologies within cyber security teams has been notably slower than the pace at which threat actors are taking advantage of AI capabilities. This cautious approach is driven by several critical factors:

  • Rigorous Testing Requirements: Cyber defence applications demand exceptionally high levels of reliability and security. As a result, AI solutions must undergo extensive testing and validation processes before they can be deployed. These rigorous requirements ensure that AI technologies perform as expected under various scenarios but also significantly prolong the adoption timeline.
  • Integration Challenges: Integrating AI into existing defence infrastructure poses significant challenges. Legacy systems, often built with proprietary technologies and outdated architectures, may not seamlessly accommodate modern AI solutions.
  • Continuous Tuning and Maintenance: AI models require continuous tuning and updates to remain effective against evolving threats. This ongoing maintenance can be resource-intensive and demands a high level of specialized expertise.
  • Ethical and Security Concerns: The deployment of AI in cyber defence also raises ethical and security concerns. Ensuring that AI systems operate within ethical boundaries and do not introduce new vulnerabilities is paramount. This includes ensuring that AI systems are used in ways that respect human rights and do not lead to unintended harmful consequences. It should be noted that AI systems are not only tools for enhancing cyber security but also potential targets for cyber-attacks.

The Future Ahead

The dual use of AI has ignited a race between threat actors and defenders, necessitating a balanced and continually evolving security strategy. As defenders, we must recognize AI's potential to transform security operations by making them more proactive and intelligence-driven. AI-enhanced incident response and security observability represent the next frontier in cyber security, requiring an understanding of how best to apply these technologies. By embracing AI, organizations can defend against current threats and pave the way for future advancements, staying ahead of threat actors who are increasingly sophisticated in their use of AI.

Our call to action for CISOs and cyber security teams is clear: start now, explore, and adopt AI in cyber security practices.

  1. Begin Immediate Exploration: Start by identifying AI technologies relevant to your organization’s security needs.
  2. Undergo Training: Enrol your team in practical training programs focused on AI applications in cybersecurity, such as anomaly detection, automated threat response, and predictive analytics.
  3. Pilot AI Implementations: Integrate AI tools in a controlled environment to monitor their effectiveness and understand their operational impact.
  4. Scale AI Solutions: Once initial trials are successful, expand the use of AI across various cybersecurity operations.
  5. Continuous Learning and Adaptation: Regularly update your team’s skills with ongoing training to keep up with evolving AI technologies and threat landscapes.

AI has evolved from a buzzword into a tangible asset for cyber security applications. It’s time for organizations to embrace this technology and get trained to use it effectively to stay ahead of threat actors. The urgency of this call is underscored by a recent SOC AI event, where only two out of 70 attendees indicated they were using or exploring AI in daily operations. This highlights the untapped potential and the imperative for organizations to adopt AI in cybersecurity measures promptly, positioning themselves to ride the wave of AI innovation and enhance security resilience and effectiveness.

At KPMG, we explore the best ways to integrate artificial intelligence (AI) into cyber security strategies to enhance incident response and security observability for our clients. As we continue to explore these technologies, our goal remains to spark interest and conversation about the potential and challenges of AI in cyber security, while driving businesses towards secure, AI-enhanced cyber solutions.