By using AI-based systems and solutions, companies hope to achieve higher economic performance through automation, more efficient processes and cost savings from the more targeted deployment of employees.

On 21 May 2024, the European Union adopted the EU AI Act, which establishes a legal framework for the safe use of artificial intelligence and will govern its deployment going forward. Companies must take this framework into account when using AI solutions.

However, this technological development also has a threatening downside: white-collar criminals are taking advantage of artificial intelligence as well.

The possibilities of GenAI

The potential uses of generative AI have led to a wide range of applications, from interactive assistance systems such as ChatGPT to the generation of digital media content, for example via Sora, an artificial intelligence that creates video clips in photorealistic quality from short text descriptions provided by users. It is now possible not only to create texts that are coherent in content and semantics, but also to generate images and documents and to imitate audio and video content. Such images and video content can also be exploited by white-collar criminals: familiar fraud patterns can thus be carried out at a new scale and with new facets.

Impact on eCrime

The forms of AI-generated fraud are complex. They range from customised social engineering strategies and the fraudulent use of AI-generated, context-related information to identity forgery of unprecedented quality.

Social Engineering 2.0

Social engineering attacks already pose a risk to companies. Using phishing emails, fake phone calls or manipulated websites, attackers try to exploit human weaknesses and behaviour to obtain confidential information or gain unauthorised access to systems. In addition to often considerable financial damage, this also leads to a loss of trust and reputation. In the past, social engineering attacks required a great deal of research by white-collar criminals; the use of AI now offers new possibilities. AI can be trained on emails from real people or companies in order to create and send texts with authentic-looking content, written in the style of the person or company concerned. In addition, AI is now able to manipulate documents and images, imitate voices and impersonate people in video conferences. Social engineering is therefore becoming an even more complex threat, one that is no longer carried out only via email but is also technically feasible via telephone or even video calls.

The first cases are already known. For example, a British company recently reported losses totalling 23 million euros after an employee joined a video call with the company's supposed CFO. The CFO turned out to be a deepfake clone used for this so-called "fake president" fraud (CEO fraud).

How contextual information becomes a danger

Contextual information about the performance or development of a company can have a material impact on the capital market and therefore on the share price of listed companies. White-collar criminals use such information to manipulate the capital market, with the aim of making an illegal profit from buying or selling shares.

Until now, manipulating share prices via false reports involved a certain amount of effort: background information on the company concerned had to be researched, a coherent text formulated and then published on one or more platforms. Today, AI enables the simple, automated creation of such contextualised texts. An AI can, for example, generate a large number of false reports whose sheer volume leads readers to regard the information as credible and to act on it on the capital market. The difficulty of recognising false reports is exacerbated by the scalability and precision with which AI creates these texts, making manipulation easier and more efficient than ever before.

Business with fake customers

Companies are already confronted with fake customers who use fabricated identities, forged credentials or bogus websites and who can no longer be traced, for example after ordering goods. Upon investigation, it turns out that the customer does not exist at all, even though they legitimised themselves with credible websites, documents supposedly signed by official bodies, references to board members and profiles on several social media sites. This creates a significant risk for companies in terms of payment defaults and money laundering. Recognising forged documents during customer onboarding is already difficult and is likely to become even more challenging with the use of artificial intelligence, because AI plays a decisive role in the creation of fake ID cards, proofs of identity, certifications (qualifications or licences), CVs, social media profiles and websites. Modern technologies make it possible to create deceptively realistic documents and profiles in a very short time, ones that the human eye can no longer distinguish from the genuine article.

Opportunities and measures for dealing with AI-supported fraud

Artificial intelligence takes fraud to a new level: the low labour, time and cost requirements make its use attractive to white-collar criminals. For companies, this creates an additional threat situation that will continue to evolve, and one that every company should be aware of so that it can derive customised measures. In addition to clear guidelines for passing on information (including internally) via telephone, email and other communication channels, employees also need appropriate training on current developments and threat scenarios.

In addition to the risks mentioned, artificial intelligence also offers new opportunities to counteract the activities of white-collar criminals. By using AI themselves, companies can recognise potential threats at an early stage and respond to them. For example, AI-based systems can identify unusual patterns in data traffic and thus indicate possible fraud attempts. Software solutions that detect anomalies in communication behaviour and issue corresponding warnings can also be used to provide support. AI can also help to verify the authenticity of documents and identities by performing complex identity checks and ensuring data integrity.
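As a minimal sketch of this idea, the following Python example shows how an anomaly detection model such as an isolation forest (here via the open-source scikit-learn library) could flag unusual patterns in communication metadata. The feature names, data and thresholds are illustrative assumptions, not a reference implementation of any particular product.

```python
# Minimal sketch: flagging anomalous communication behaviour with an
# isolation forest (scikit-learn). Features and data are hypothetical;
# a production system would use real telemetry and careful tuning.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per observation: [emails sent per hour,
# share of external recipients, average attachment size in MB]
normal_traffic = rng.normal(loc=[5.0, 0.2, 1.0],
                            scale=[1.5, 0.05, 0.5],
                            size=(500, 3))

# Train on historical, presumed-benign behaviour only.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_traffic)

# Score new observations; predict() returns -1 for anomalies.
new_events = np.array([
    [5.2, 0.21, 0.9],    # resembles ordinary traffic
    [40.0, 0.95, 25.0],  # burst of large, mostly external mails
])
for event, label in zip(new_events, model.predict(new_events)):
    status = "ANOMALY - escalate for review" if label == -1 else "ok"
    print(event, status)
```

In practice, such a model would only raise candidates for human review; the design choice of training on presumed-benign history means that genuinely novel but legitimate behaviour can also be flagged, which is why clear escalation processes matter as much as the model itself.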

Between opportunity and risk

Companies should therefore address both sides and take proactive measures to continuously improve their security strategies and thus arm themselves against the increasing threats posed by AI-supported fraud patterns.

For further information and tailored solutions for your individual needs, the experts at KPMG will be happy to assist you.
