Artificial intelligence (AI) has become an important tool in preventing and detecting fraud, offering faster and more accurate methods. AI can analyze large volumes of data and identify patterns that would otherwise go unnoticed by human eyes. Large datasets can be explored to extract implicit, previously unknown, and potentially useful information, enabling organizations to enhance their resilience against risks such as financial fraud and identity theft. However, while AI holds significant potential, its implementation and use come with substantial responsibilities. Ensuring ethical use and compliance with regulations, such as the European AI Act, is therefore essential.
In our blog series ‘AI & Forensics’, we covered how Fraud Data Analytics (FDA) enhances detection and prevention, and how generative AI (GenAI) is transforming money laundering and fraud risk management. We also explored its role in fraud investigations and how AI-generated content can be identified to prevent misuse. Additionally, we discussed the importance of model management and of monitoring AI-powered compliance models.
In this final blog, we focus on compliance and the ethical implications of using AI in fraud prevention and detection, which is essential to prevent unintended harm caused by misuse of, or bias in, AI systems. For instance, the SyRI system in the Netherlands, which was designed to detect various types of fraud, faced significant criticism for profiling certain communities and raised privacy and discrimination concerns about algorithmic decision-making. This ultimately led to a court ruling that declared the system’s use a violation of human rights. Similarly, other AI-powered government fraud detection models have faced scrutiny over transparency and fairness. This highlights the importance of responsible AI practices that safeguard individual rights and maintain public trust.
Compliance considerations
When incorporating AI into fraud prevention and detection strategies, organizations must comply with regulatory requirements. Transparency, fairness, and accountability must be foundational to the development and deployment of AI models. The European AI Act (AI Act), which came into force in August 2024, sets out a framework to ensure that AI systems are used safely and uphold fundamental rights. The AI Act classifies AI systems into four categories based on their risk level: (1) unacceptable risk (prohibited AI systems), (2) high risk, (3) limited risk, and (4) minimal or no risk. Its primary focus is on protecting consumers, which is why it imposes stricter requirements on high-risk AI systems.
For high-risk AI systems, organizations must meet specific obligations, such as maintaining thorough documentation that outlines the decision-making processes of their AI models and conducting regular validations to detect and mitigate potential biases. These requirements are critical to ensuring fairness and reliability, particularly in areas like fraud detection, where inaccurate or biased outcomes could lead to significant harm.
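To make this concrete, the sketch below shows one way such a recurring validation could be documented in practice: it evaluates the outputs of a hypothetical fraud-scoring model, computes false positive rates per customer group, and stores the results in a timestamped record. The function name, group labels, and data are illustrative assumptions on our part, not requirements prescribed by the AI Act.

```python
import json
from datetime import datetime, timezone

def validate_and_document(y_true, y_pred, groups, model_version="fraud-model-v1"):
    """Hypothetical recurring validation: compute per-group false positive
    rates and store them in a simple, auditable documentation record."""
    record = {
        "model_version": model_version,
        "validated_at": datetime.now(timezone.utc).isoformat(),
        "per_group_false_positive_rate": {},
    }
    for group in sorted(set(groups)):
        # Legitimate (non-fraud) cases belonging to this group
        idx = [i for i, g in enumerate(groups) if g == group and y_true[i] == 0]
        if not idx:
            continue
        false_positives = sum(1 for i in idx if y_pred[i] == 1)
        record["per_group_false_positive_rate"][group] = round(false_positives / len(idx), 3)
    return record

# Illustrative run with synthetic labels (1 = fraud), model predictions, and group membership
y_true = [0, 0, 1, 0, 0, 1, 0, 0]
y_pred = [0, 1, 1, 0, 1, 1, 0, 0]
groups = ["A", "A", "A", "B", "B", "B", "B", "A"]
print(json.dumps(validate_and_document(y_true, y_pred, groups), indent=2))
```

Records like this, kept for every validation cycle, support both the documentation and the bias-monitoring obligations described above.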
Moreover, AI systems in fraud prevention and detection must comply with the General Data Protection Regulation (GDPR), which requires organizations to handle personal data responsibly and safeguard individuals’ privacy rights. As AI continues to evolve, organizations must go beyond merely meeting compliance requirements and take proactive steps to address ethical concerns, ensuring their systems are fair, effective, and aligned with both legal and societal expectations.
Ethical implications
In the context of fraud prevention and detection, AI introduces significant ethical considerations that must be addressed to avoid unintended harm. It’s essential that organizations prioritize ethical data practices, ensuring transparency and fairness in data collection, processing, and usage. This includes the use of diverse and representative datasets to train models, along with regular bias assessments to minimize the risk of discriminatory outcomes.
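As a minimal illustration of such a bias assessment, the sketch below compares how often a model flags cases across (hypothetical) customer groups and computes a disparate impact ratio. The group column, the 0.8 threshold, and the data are illustrative assumptions rather than a prescribed method.

```python
import pandas as pd

# Illustrative data: 1 = case flagged as potential fraud, per hypothetical customer group
df = pd.DataFrame({
    "group":   ["A", "A", "A", "A", "B", "B", "B", "B"],
    "flagged": [1, 0, 0, 1, 1, 1, 1, 0],
})

# Share of flagged cases per group
flag_rates = df.groupby("group")["flagged"].mean()

# Disparate impact ratio: lowest flag rate divided by highest flag rate.
# The 0.8 threshold is a common rule of thumb, not a legal standard, and which
# outcome counts as unfavorable depends on the context of the model.
ratio = flag_rates.min() / flag_rates.max()

print(flag_rates)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential disparity detected; review the data and model for bias.")
```

In practice, checks like this would be run on representative evaluation data and repeated whenever the model or its input data changes.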
The EU AI Act addresses many of these ethical concerns, particularly for high-risk AI systems, by requiring a risk management system to mitigate issues such as algorithmic bias. AI models trained on historical data can unintentionally reinforce existing inequalities, leading to biased decisions that may violate human rights. Transparency requirements, in turn, address the opacity of AI models, supporting public trust and accountability in decision-making processes.
However, we believe that legislation alone is not sufficient. Ethical considerations must go beyond what is legally permitted and challenge us to think about the moral implications of AI systems. It’s not only about compliance but also about what society considers fair, right, and acceptable. Given the emerging use of AI systems, it is important to raise ethical questions that balance innovation and responsibility, ensuring that AI systems are also tested for potential risks to the organization itself.
The Responsible AI Framework
To harness the full potential of AI while ensuring its ethical deployment, the Responsible AI Framework offers practical guidelines. This framework is designed to guide organizations in using AI systems in a way that is trustworthy, safe, and free from bias, while maximizing the benefits AI can bring. The framework spans the entire AI lifecycle – from design and development to deployment and ongoing use – to ensure that responsible practices are integrated at every stage.
The Responsible AI Framework consists of ten ethical pillars, which help organizations safely and responsibly unlock the value of AI:
- Fairness: AI models must reduce or eliminate biases against individuals, communities or groups, ensuring equitable outcomes for all.
- Transparency: Responsible disclosure is essential to provide stakeholders with a clear understanding of what is happening within the AI system and across the entire AI lifecycle.
- Explainability: AI systems must be understandable, with clear explanations of how and why recommendations are made or conclusions are drawn, fostering trust and clarity.
- Accountability: Human oversight and responsibility must be embedded across the AI lifecycle to manage risk and ensure compliance with regulations and applicable laws.
- Security: For the protection of both data and infrastructure, AI systems must be safeguarded against unauthorized access, bad actors, misinformation, corruption, or attacks.
- Privacy: Compliance with data privacy regulations is crucial, ensuring that consumer data is used responsibly and privacy rights are preserved.
- Sustainability: AI systems should be optimized to limit negative environmental impact, contributing to a sustainable future.
- Data integrity: High-quality, well-governed data is the foundation of trustworthy AI. Proper data handling, enrichment, and governance are crucial steps to embed trust.
- Reliability: AI systems must consistently perform at the desired level of precision.
- Safety: AI systems should be designed to minimize potential harm to humans, property, or the environment.
By adhering to these ten pillars, organizations can develop AI systems for fraud prevention and detection that are not only effective but also ethical and compliant, fostering trust and minimizing risk for all stakeholders involved. KPMG helps establish and implement a responsible AI strategy and governance model that focuses on ethical design and deployment.
At KPMG Forensic, we recognize that a responsible application of AI is essential for effective and fair fraud prevention. Our team provides services to support organizations in leveraging AI to enhance their fraud prevention and detection strategies, ensuring alignment with legal and ethical standards. Want to know more? Visit the KPMG Forensic website via Forensic Services - KPMG Nederland or contact us directly.