The rise of artificial intelligence (AI) in business operations introduces complex legal and ethical challenges – particularly when AI systems act in ways that could result in fraud. A pressing question emerges: if AI commits fraud that benefits an organisation, can the organisation be prosecuted under the UK’s Failure to Prevent Fraud (FtPF) offence?
Understanding the failure to prevent fraud offence
The Economic Crime and Corporate Transparency Act 2023 introduced the FtPF offence in the UK. This legislation holds large organisations criminally liable if they fail to prevent employees, agents, or associated persons from committing fraud for the organisation’s benefit. The offence applies to companies meeting certain size thresholds and aims to encourage robust counter-fraud procedures.
Crucially, liability arises even if senior management was unaware of the fraudulent act. The only available defence is that the organisation had “reasonable procedures” in place to prevent the fraud.
Where does AI fit in?
AI systems, particularly those with autonomous decision-making capabilities, can execute actions that resemble fraudulent behaviour – such as manipulating financial data or misrepresenting information to help secure contracts. However, an essential element of fraud is dishonest intent: an AI system cannot form that intent, and it is not itself an employee, agent or associated person within the meaning of the Act.
This raises a critical distinction: AI cannot be prosecuted, but the organisation deploying it could face scrutiny for not having reasonable prevention procedures in place. If an AI system executes a fraudulent act and the organisation benefits, regulators may argue that the company failed to implement adequate safeguards, such as sound governance and humans in the loop to oversee AI models and their performance.
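To make the “humans in the loop” point concrete, the sketch below shows one way an oversight gate might work in practice: actions proposed by an AI system above a materiality threshold are blocked until a named person approves them, and every decision is written to an audit record. This is a minimal, illustrative Python example; the class, threshold and field names are assumptions, not a reference to any particular product or to KPMG’s methodology.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ProposedAction:
    """An action suggested by an AI system, pending human review (illustrative)."""
    description: str
    amount_gbp: float
    model_id: str
    proposed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


def requires_human_review(action: ProposedAction, threshold_gbp: float = 10_000) -> bool:
    """Route material actions to a human reviewer before execution."""
    return action.amount_gbp >= threshold_gbp


def execute_with_oversight(action: ProposedAction, approved_by: str | None) -> dict:
    """Execute only if the action has cleared the oversight gate; record everything."""
    if requires_human_review(action) and approved_by is None:
        decision = "blocked_pending_review"
    else:
        decision = "executed"
    # The audit record supports a "reasonable procedures" argument:
    # what was proposed, by which model, who approved it, and when.
    return {
        "model_id": action.model_id,
        "description": action.description,
        "amount_gbp": action.amount_gbp,
        "approved_by": approved_by,
        "decision": decision,
        "proposed_at": action.proposed_at.isoformat(),
    }


if __name__ == "__main__":
    invoice = ProposedAction("Adjust supplier invoice total", 25_000, "pricing-model-v3")
    print(execute_with_oversight(invoice, approved_by=None))       # blocked pending review
    print(execute_with_oversight(invoice, approved_by="j.smith"))  # executed after approval
```

The design choice here is simply that the AI proposes and a human disposes for anything material, with the decision trail retained as evidence that the control is operating.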
The EU AI Act: Context and compliance
The EU AI Act (Regulation (EU) 2024/1689), adopted in 2024, classifies AI systems by risk and imposes strict obligations on high-risk applications, including transparency, human oversight, and accountability. Organisations must ensure that AI systems are designed and monitored to prevent unlawful outcomes, including by carrying out risk assessments, which is also a requirement under the FtPF offence. While the Act does not create criminal liability for AI actions, it reinforces the principle that humans remain responsible for AI-driven decisions.
Implications for organisations
The convergence of these frameworks means organisations cannot hide behind the argument that “the AI did it.” Regulators will likely examine whether the organisation had:

- reasonable fraud prevention procedures that cover AI-driven processes;
- documented risk assessments of where AI systems could produce fraudulent outcomes;
- human oversight of AI decisions, with clear lines of accountability; and
- monitoring and governance to detect and escalate suspicious AI behaviour.
Failure to implement these measures could expose organisations to liability under the FtPF offence, even if the fraudulent act was executed by an algorithm.
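As one illustration of what such measures could look like operationally, the Python sketch below logs every AI-driven decision to an audit trail and flags outputs that fall outside agreed commercial limits for counter-fraud review. The function names, the discount-rate check and the policy threshold are hypothetical; real controls would be tailored to the organisation’s own fraud risk assessment.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("ai_fraud_controls")


def record_ai_decision(model_id: str, inputs: dict, output: dict, reviewer: str | None) -> None:
    """Append an audit entry for every AI-driven decision.

    A persistent, reviewable trail is one piece of evidence that prevention
    procedures exist and are actually operating.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "inputs": inputs,
        "output": output,
        "human_reviewer": reviewer,
    }
    log.info(json.dumps(entry, sort_keys=True))


def flag_for_investigation(output: dict, max_discount_rate: float = 0.15) -> bool:
    """Flag outputs outside agreed commercial limits for counter-fraud review (hypothetical rule)."""
    return output.get("discount_rate", 0.0) > max_discount_rate


if __name__ == "__main__":
    decision = {"discount_rate": 0.40, "customer": "ACME Ltd"}
    record_ai_decision("bid-model-v2", {"tender_id": "T-101"}, decision, reviewer=None)
    if flag_for_investigation(decision):
        log.info("Decision exceeds policy limits; escalate to the counter-fraud team.")
```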
Key takeaway
AI cannot have intent – but organisations can. Deploying AI without adequate oversight and reasonable procedures is a legal and reputational risk. To mitigate this, businesses must embed human accountability into AI governance, ensuring that technology serves as a tool, not an unmonitored actor. In the age of intelligent systems, responsibility remains human.
For more on fraud risk management and technology, contact Annabel Reoch and Ethan Salathiel.