Will AI help or hinder cyber security? Probably both.

On one hand, AI is crucial to building future cyber resilience, with security teams using it to improve threat detection and strengthen business defences. On the other hand, AI capability in the wrong hands can be used to quickly exploit weaknesses we’re not even aware of yet. Add to this the risk of businesses adopting AI without considering security, and the fact that AI is available to anyone, and hackers can become faster and more sophisticated.

Chief Information Security Officers (CISOs) face the tough job of enabling their business to embrace AI securely: protecting the organisation while keeping pace with the evolving threat landscape and the emergence of new, sophisticated AI-driven threats.

We explore how your organisation can stay proactive and resilient by remaining vigilant and having the right guardrails and threat response strategies in place. We also look at emerging opportunities for generative AI to strengthen cyber security.


Unlocking the potential of AI … carefully

As organisations start making the most of AI’s transformative power, security and privacy will play an even greater role in business strategy. After all, AI is more than just new technology: it needs to align with business objectives and ethical standards without introducing risk or compromising robust security and privacy practices.

Starting with a solid AI framework

To make the most of the efficiencies and value AI brings to an organisation, there must be frameworks in place to balance risk management and governance from the start.

If an organisation considers privacy and security from the outset, these can become natural components of day-to-day business operations. Just as cloud technology became a standard part of everyday business, AI adoption is expected to evolve into a routine aspect of business operations, making a secure AI approach key to future growth.

Cross-functional cooperation

Establishing and maintaining trust in AI is critical for brand reputation and business growth. This requires cross-functional cooperation and a unified security, privacy, data science and legal strategy.

On 5 September 2024, Australia released its Voluntary AI Safety Standard to help organisations navigate the governance and controls needed to make sure AI is implemented safely and in the best interests of customers, the organisation, employees and society.

Security-by-design thinking

Integrating privacy- and security-by-design thinking into AI and other emerging technologies requires the people who manage them to adopt privacy- and security-first mindsets.

This includes how an organisation develops its AI algorithms, whether the technology has a clearly defined purpose, and whether the AI training data is relevant and aligns with that purpose.

Three AI and cyber steps to take today

Keeping your AI technology cyber secure involves taking the following steps:

  1. Align your AI framework with Australia's Voluntary AI Safety Standard.
  2. Develop solid AI governance with cross-functional support from business leaders. Learn about KPMG’s Trusted AI framework.
  3. Ensure the purpose of the AI technology is clearly defined and documented, the training data is relevant and appropriate, and consent for its use is secured.

Opportunities for generative AI-enabled cyber security

While it’s still early days, we’re seeing many CISOs look into how generative AI can be effectively applied across their cyber functions, with the new technology showing promise in four key areas:

Cyber forensics and response

Threat detection and rapid response

Generative AI can enhance cyber security by quickly analysing data and error codes for threat detection and response, and improving alert triage based on checklists and past incidents.

Phishing and fraud prevention

Gen AI can identify suspicious email patterns, monitor user behaviour for fraud in real time, and scrutinise accounts payable to prevent unauthorised payments.
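To make the idea of "suspicious email patterns" concrete, here is a minimal, hypothetical sketch in Python. A production system would use a trained model rather than hand-written rules; the phrases, function name and scoring weights below are illustrative assumptions only, showing the kinds of signals such a model weighs.

```python
import re

# Illustrative phrase list only; a real model learns these signals.
SUSPICIOUS_PHRASES = ("urgent", "verify your account", "payment overdue")

def phishing_score(sender: str, reply_to: str, body: str) -> int:
    """Return a simple risk score: higher means more suspicious."""
    score = 0
    # Reply-to domain differing from the sender domain is a common tell.
    if reply_to and reply_to.split("@")[-1] != sender.split("@")[-1]:
        score += 2
    lowered = body.lower()
    score += sum(1 for phrase in SUSPICIOUS_PHRASES if phrase in lowered)
    # Links pointing at a raw IP address instead of a named host.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        score += 2
    return score
```

In practice the score would feed a triage queue, with high-scoring messages quarantined or escalated for review.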

Insider threat detection

By auditing activity like login times, file access patterns and network activity, gen AI can learn normal user patterns and flag unusual behaviour that may indicate a threat.
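The underlying idea can be sketched simply: establish a baseline of normal behaviour, then flag deviations from it. The hypothetical sketch below uses login hours and a z-score threshold as a stand-in; a gen AI system would combine far richer signals (file access, network activity) in a learned model. Function and parameter names are illustrative assumptions.

```python
from statistics import mean, stdev

def flag_anomalies(history: list[int], new_logins: list[int], z: float = 3.0) -> list[int]:
    """Return login hours whose z-score against the historical baseline exceeds z."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        sigma = 1.0  # avoid division by zero on a perfectly flat baseline
    return [hour for hour in new_logins if abs(hour - mu) / sigma > z]
```

For a user who normally logs in around 9am, a 3am login stands well outside the baseline and would be flagged for investigation.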

Security operations

Vulnerability management

Gen AI enhances existing methods for detecting software vulnerabilities and can advise on response strategies, for example acting as a chatbot adviser that suggests context-specific remediation based on vendor tool recommendations.

Attack surface management

Monitoring the perimeter (e.g. firewalls, servers and public cloud transactions) is an evolving challenge. Gen AI can be used to establish a baseline of normal activity and flag any changes, helping teams assess the degree of risk.

Metrics, dashboards and reporting

Generative AI is well suited to turning data into plain-language insights or graphs. It can also compile internal and external information, together with recommendations for next steps and process improvements.

Identity and access

Least-privilege security

Access control is often a manual, discretionary task with inconsistent monitoring. Gen AI has the potential to automate it, for example by enforcing job-specific access and systematically monitoring changes to baseline requirements.
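As a rough illustration of what automating this review could look like, the hypothetical sketch below compares a user's granted permissions against their role's job-specific baseline and reports the excess. Role names and permission strings are invented for illustration.

```python
# Illustrative role-to-baseline mapping; a real system would pull this
# from an identity governance platform.
ROLE_BASELINE = {
    "analyst": {"read_reports", "run_queries"},
    "engineer": {"read_reports", "deploy_code"},
}

def excess_permissions(role: str, granted: set[str]) -> set[str]:
    """Return permissions granted beyond the role's job-specific baseline."""
    return granted - ROLE_BASELINE.get(role, set())
```

Running such a check on a schedule turns ad hoc access reviews into systematic monitoring: any non-empty result is a candidate for revocation or an explicit exception.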

Identity protection

Impersonation and other misuse of digital identities can be forestalled through gen AI applications that analyse and report patterns of behaviour, identifying signals that point to suspicious activity.

Third-party supply chain management

Inherent risk profiling

Gen AI can streamline the cumbersome task of assessing third-party risk postures, enabling organisations to evaluate all current and potential partners efficiently and comprehensively.

Ongoing risk management

Gen AI tools can enable automated monitoring of security compliance, mitigation strategy status, and changes impacting risk profiles, providing continuous due diligence across the entire supply chain.

So, AI and cyber are friends – but proceed with caution

It’s fair to say the opportunities arising from bringing together AI and cyber are exciting, and offer many benefits to the safety of an organisation – so long as the AI is aligned with the business purpose, implemented with the right guardrails in place, and backed by leaders across functions.

AI and cyber specialists: how KPMG can help

With in-depth knowledge and experience across AI and cyber security, KPMG is uniquely positioned to support organisations with their security and technology needs.

We offer comprehensive cyber security services from boardroom strategy to data centre protection, aligning cyber defences to your business priorities. And we help organisations navigate the ever-changing AI landscape, focusing on trust, transparency and accountability.

Ask us how we can help you anticipate tomorrow, and move more efficiently with secure and trusted technology.

Contact KPMG's AI and cyber specialists

If you have any questions about how we can support your cyber or AI journey, please get in touch.
