      In the age of AI, fraud doesn’t always look suspicious. Increasingly, it looks like business as usual. And the financial repercussions can be significant, impacting private and publicly traded companies, large and small, across all industries.

      This means fraud prevention can no longer be treated as a periodic awareness initiative, something we talk about during Fraud Prevention Month every March and then promptly forget about. It has to become a fundamental, continuous and consistent management and governance priority.

      A new KPMG Canada survey helps clarify the stakes: over the previous 12 months, 72 per cent of respondents had lost up to 5 per cent of business profits to AI-powered attacks, and 94 per cent said they’re concerned about the risk of attacks to come over the next 12 months.

      On top of that:

      • 81% had experienced attempted or successful AI-powered fraud, and 72% of those were targeted more than once.
      • 60% had fallen victim to fraudulent email/chat using AI agents or AI-generated content.
      • 39% had experienced AI-powered deepfake document fraud.
      • 24% were victims of voice clone attacks.

      The fact is, no organization is immune from the risk of AI-powered fraud. It is indeed a new era. Is your organization ready for it?


      The industrialization of fraud

      Until now, fraud activity was constrained by the amount of time, skill and resources needed to perpetrate it. Rapidly proliferating AI has removed these constraints. While organizations invest in AI to detect and prevent fraud, fraudsters are doing the same—often more quickly and less expensively. The result is a threat environment where fraud is no longer simply about stolen credentials or rule evasion, but manufactured legitimacy at scale.
       

      Today’s fraud schemes are:
      • Highly personalized, using AI to tailor attacks to specific individuals or organizations.
      • Cross-channel, impacting all channels and designed to be consistent across them.
      • Adaptive, learning from failed attempts and adjusting in near-real-time.

      This marks a shift from opportunistic fraud to industrialized fraud operations, where fraudsters train AI models to mimic the legitimate behaviour of customers, trusted employees and even executives.

      It also creates a critical organizational challenge: if traditional fraud controls were designed to detect anomalies, and AI-powered fraud is designed to avoid appearing anomalous, how do we sort the legitimate from the fraudulent?


      AI fraud at scale

      AI has transformed many attacks from blunt-force credential stuffing into fast, precision-engineered intrusions. One counterintuitive result is a sharp increase in low-and-slow fraud—attacks that fly under the radar while steadily extracting value. This often leads to a difficult trade-off: tightening controls risks customer friction, while loosening them can invite additional losses. Consider the following examples:


      Synthetic identity fraud

      This type of fraud isn’t new, but AI has made it far more dangerous. Perpetrators now use generative AI to create identities for credit applications, supplier onboarding and systems access that are internally consistent across:

      • Names, addresses and government identifiers.
      • Credit history and transaction behaviours.
      • Digital footprints, including social media presence.

      Criminals then nurture these identities (which AI also allows them to create much more quickly), opening low-risk bank accounts and building credibility over time before rapidly monetizing them through credit facilities, loans or payment fraud.

      The challenge is that these identities behave exactly as expected—until they don’t. Organizations often detect losses only months later, well after the synthetic identities have cleared onboarding controls.


      Deepfake-enabled social engineering

      One of the fastest-growing threats facing organizations across industries is the use of deepfake audio and video clones to pass basic “liveness” and identity checks in social engineering attacks. Examples include:

      • Voice-cloned calls impersonating CEOs or CFOs requesting urgent wire transfers.
      • Deepfake video used during remote identity verification or employee/customer authentication.
      • Synthetic voices bypassing call centre security questions.

      These attacks exploit a fundamental assumption embedded in many control environments: that seeing or hearing is a reliable indicator of authenticity. In an AI-powered world, that assumption no longer holds. 


      Machine identity exploitation

      While deepfakes manipulate human-facing interactions, non-human identities represent another critical dimension of AI-enabled fraud and cyber risk.

      Machine identities—including applications, application programming interfaces (APIs), service accounts, internet-connected devices, bots and AI agents—increasingly outnumber human accounts. They often hold powerful privileges yet remain poorly governed, creating issues such as:

      • Limited visibility into ownership of the identity, what it can access and how it’s used.
      • Long-lived accounts and API keys that stay active long past their original purpose.
      • Weak “secrets hygiene,” such as hard-coded credentials, infrequent rotation and limited monitoring.
      • Early-stage agentic AI that can initiate actions or use credentials autonomously but lacks clear guardrails.

      Attackers target these gaps to gain persistent access to critical systems, exfiltrate data or automate fraud at scale—for example, orchestrating large volumes of payment attempts, automating account testing, or manipulating data that drives financial decisions. 
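      The governance gaps above lend themselves to simple automated checks. Below is a minimal sketch in Python against a hypothetical machine-identity inventory; the field names, thresholds and policy values are illustrative assumptions, not drawn from any specific IAM product:

      ```python
      # Hypothetical machine-identity inventory; fields are assumptions
      # for illustration, not a real IAM schema.
      inventory = [
          {"name": "svc-payments",   "owner": "finance-ops", "last_rotated_days": 400, "last_used_days": 2},
          {"name": "api-key-legacy", "owner": None,          "last_rotated_days": 900, "last_used_days": 365},
          {"name": "bot-reporting",  "owner": "data-team",   "last_rotated_days": 30,  "last_used_days": 1},
      ]

      MAX_KEY_AGE_DAYS = 90   # example rotation policy
      STALE_USE_DAYS = 180    # unused beyond this suggests an orphaned identity

      def review(identity):
          """Return a list of governance findings for one machine identity."""
          findings = []
          if identity["owner"] is None:
              findings.append("no accountable owner")
          if identity["last_rotated_days"] > MAX_KEY_AGE_DAYS:
              findings.append("credential overdue for rotation")
          if identity["last_used_days"] > STALE_USE_DAYS:
              findings.append("dormant; candidate for retirement")
          return findings

      for ident in inventory:
          issues = review(ident)
          if issues:
              print(f"{ident['name']}: {', '.join(issues)}")
      ```

      In practice these checks would run continuously against a live inventory feed, with findings routed to the identity’s accountable owner rather than printed.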



      Why traditional controls fail

      Many organizations are responding to AI-enabled fraud with incremental enhancements to existing, pre-AI operating models. This often falls short because:

      • Existing controls assume fraudsters are human.
      • Identity was designed for a world where seeing/hearing someone was reliable.
      • Static checks, like point-in-time, single-event verification, fail in a dynamic threat environment.
      • Siloed teams (fraud, cyber, identity) can’t fight a cross-channel threat.

      More to the point, traditional controls are fundamentally reactive, focused on detecting and recovering losses after the fact. Modern, AI‑powered fraud, however, blends into normal activity and bypasses static, event‑based defenses.

      The first step to solving this challenge is recognizing the scope of it: think of fraud prevention as not just a technology issue but also a strategic capability that incorporates governance, talent and accountability.

      Respondents to our survey said their biggest challenges in identifying AI-powered attacks are difficulty distinguishing authentic vs AI-generated content (video, voice, documents), and limited staff expertise in AI-enabled fraud detection. The good news is they’re focusing on addressing these gaps:

      • 81% said they conduct employee training on fraud awareness and insider risk every 6-12 months.
      • 61% said they have run organization-wide training on AI-powered fraud schemes like deepfakes and voice clones in the past 12 months.
      • 52% are using AI-powered fraud defences to combat AI-enabled fraud attacks.

      However, only 26 per cent have actually implemented and tested a comprehensive, formal and written fraud incident response plan that explicitly covers AI-powered attacks.

      Looking forward, 67 per cent of the Canadian companies we surveyed plan to increase their fraud prevention and detection budgets by 1-7 per cent in 2026, compared to 2025, with most planning to allocate these budgets toward:

      • Detection technology
      • Employee training/awareness
      • Transaction controls (e.g., daily transfer limits, dual authentication, identity verification).


      Effective fraud programs in the age of AI

      Effective fraud programs are now defined less by point-in-time checks and more by continuous, risk-based controls—layered across identity, behaviour, devices and channels—to prevent, detect and disrupt fraud earlier in the lifecycle. Capabilities include:

      • Continuous identity verification

        Focuses on identity as a continuous authentication process that includes:

        • Cryptographic, phishing-resistant authentication, so even convincing deepfakes can’t succeed without possessing underlying cryptographic keys.
        • Device and session binding that incorporates device reputation, history and integrity checks into risk scoring.
        • AI-enabled detection and anomaly analytics to flag unusual combinations or patterns that suggest scripted or AI-powered attacks.
      • Behavioural analytics and biometrics

        Such as typing cadence; navigation patterns; transaction history; and time, location and channel data, to continuously assess whether the activity is legitimate.

      • Deepfake-aware liveness checks

        That evolve beyond conventional movement-based checks to detect injected media, replay attacks and synthetic imagery or audio.

      • Zero Trust identity flows

        That treat every identity verification request as untrusted, regardless of its origin, and instead continuously verify the identity and trustworthiness of users, devices and applications—before, during and after they’ve accessed network resources.

      • Machine identity governance

        Including assessments to identify high-risk or orphaned accounts and enforce least-privileged access; governance and automated controls to establish clear policies for creating, naming, approving and retiring non-human identities and AI agents; and technical guardrails and policy enforcement to constrain what machine identities and agents can do, where and under what conditions.

      • Cross-channel analytics

        (including fraud, cyber and identity) to identify and address coordinated fraud patterns.

      • Human-in-the-loop decisioning

        That integrates human oversight, feedback and expertise to improve accuracy, safety and ethical decision-making for high-impact or ambiguous cases.

      • AI model risk management

        Which ensures that fraud models are explainable, governed and monitored.
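      To make the idea of continuous, risk-based decisioning concrete, the layered signals above can be combined into a single score that routes activity to allow, step-up authentication or human review. This is a minimal sketch; the signal names, weights and thresholds are assumptions for illustration only:

      ```python
      # Illustrative weights for risk signals across identity, device and
      # behaviour. All names and values are assumptions, not a real product.
      SIGNAL_WEIGHTS = {
          "new_device": 0.30,               # device not previously bound to this identity
          "atypical_typing_cadence": 0.25,  # behavioural biometrics mismatch
          "unusual_location": 0.20,
          "failed_liveness_check": 0.40,    # possible deepfake or injected media
          "off_hours_activity": 0.10,
      }

      def risk_score(signals):
          """Combine triggered signals into a score capped at 1.0."""
          score = sum(SIGNAL_WEIGHTS[s] for s in signals if s in SIGNAL_WEIGHTS)
          return min(score, 1.0)

      def decision(signals, review_threshold=0.35, block_threshold=0.6):
          """Map a risk score to an outcome for this session or transaction."""
          score = risk_score(signals)
          if score >= block_threshold:
              return "block-and-review"     # human-in-the-loop for high-risk cases
          if score >= review_threshold:
              return "step-up-auth"         # e.g. phishing-resistant re-authentication
          return "allow"

      print(decision([]))                                       # → allow
      print(decision(["new_device", "unusual_location"]))       # → step-up-auth
      print(decision(["failed_liveness_check", "new_device"]))  # → block-and-review
      ```

      A production system would learn weights from labelled outcomes and re-evaluate the score continuously through the session, rather than once at login.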

      As programs mature, the differentiator becomes how quickly they can convert signals into action. That requires intelligence sharing—first, internally across fraud, cyber, identity and risk teams to spot coordinated patterns; second, externally (where feasible), with trusted partners to improve early warning and reduce blind spots.

      Despite rapid advances in AI, humans remain essential to effective fraud prevention. Because while AI excels at pattern recognition, scale and speed, humans excel at judgement, context and ethical decision-making. In the age of AI, the goal is not to replace human expertise, but to elevate it, allowing skilled professionals to focus on complex, high-risk decisions where nuance matters most.


      Fraud prevention is now a leadership issue

      Fraud in the age of AI is no longer just an operational risk problem. It impacts organizations’ reputations, the trust of their key stakeholders and the experiences of their customers.

      For C-suite executives, boards and risk leaders, the question is no longer whether AI-enabled fraud will impact their organizations, but whether they’re prepared to respond with the same level of sophistication as the threats they face.

      KPMG assists organizations in their preparedness by performing risk assessments to understand the threat landscape, identify high-risk scenarios and assess the existing controls in place to mitigate those threats. The threat environment is constantly changing, and KPMG’s risk assessments continually adapt to those changes to help organizations stay proactive about prevention.

      As Fraud Prevention Month reminds us, prevention isn’t about eliminating fraud entirely, but building resilient organizations capable of adapting as fast as the risk evolves. In the AI age, trust must be constantly built and safeguarded. 



      About the research

      KPMG Canada surveyed business owners or executive-level C-suite decision makers at 251 Canadian companies about the instances of fraud they experienced. The survey took place between February 4-13, 2026, using the Angus Reid Group’s premier business research panel. Fifty-five per cent of the companies surveyed have annual gross revenue between $300 million and $1 billion; 23 per cent have over $1 billion; and 22 per cent have between $50 million and $299.9 million. No respondents under $50 million in annual revenue were included in the survey. Over half (62 per cent) are privately held and 38 per cent are publicly traded. Fifty per cent are based in Ontario, 19 per cent in Alberta, 13 per cent in British Columbia, and 10 per cent in Quebec. The remaining respondents are from other regions across Canada.



      How we can help

      KPMG in Canada's forensic services team can help you monitor, detect, assess and respond to risks accordingly to protect your company’s reputation.

      KPMG in Canada’s investigations services help prevent, detect and respond to suspected business fraud and misconduct through fraud risk management.

      Prevent, detect, and respond to fraud, money laundering and other financial crimes with confidence.

      We provide cybersecurity consulting services to help organizations manage and protect against cyberattacks.

      Providing strategic sourcing of internal audit, continuous auditing/monitoring, ERM, governance, and regulatory compliance.

      Your business transformation needs deep expertise, process and technology support to pivot quickly and respond rather than react.

      Insights

      Combat threats by integrating forensic, financial crime and cyber security.

      Insights and strategies to stay ahead of internal and external fraud risks.


      Connect with us

      KPMG. Make the Difference.

      We’re here to help your organization thrive.


      Myriam Duguay

      Partner and National Service Line Leader, Forensic Investigation, Integrity & Dispute Services

      KPMG Canada

      Marilyn Abate

      Partner, Risk Services, Forensic Investigation, Integrity & Dispute Services

      KPMG Canada