Fraud can affect many aspects of your business. Identifying a specific type of fraud reduces the risk of that particular type occurring, but it does not necessarily reduce the overall amount of fraud in the long term. To address this, the actuarial team at KPMG in Poland has developed a comprehensive process for detecting potential fraud, described below, that pays particular attention to its economic aspect: the amount expected to be recovered by identifying the fraud, weighed against the cost of verifying it.

Benford’s Law and Fraud Detection

In general, fraud analysis is based on searching for certain irregularities. One of the basic approaches is the use of Benford’s law, which describes the distribution of leading digits in many real-world data sets. A Benford analysis compares the observed first-digit frequencies in a data set with the expected ones and treats strong deviations as a signal of suspicious or possibly manipulated data.

This law is used, for instance, in state aid analysis, where the granting of support usually depends on whether the applicant meets certain requirements, such as having income below a certain threshold. In this context, there is a risk that applicants manipulate their data in order to meet the required criterion.

The physicist Frank Benford studied irregularities that can be detected by analysing the successive digits of numerical values. Analysing the frequency of leading digits in natural populations, he found that the digit 1 occurs as the first digit 30.1% of the time, while the digit 9 occurs only 4.6% of the time; in general, the probability that a value starts with the digit d is log10(1 + 1/d).
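
For illustration, these expected frequencies can be reproduced in a few lines of Python directly from the formula:

```python
import math

# Benford's law: the probability that d (1..9) appears as the leading digit
def benford_prob(d: int) -> float:
    return math.log10(1 + 1 / d)

for d in range(1, 10):
    print(f"digit {d}: {benford_prob(d):.1%}")
# digit 1 -> 30.1% ... digit 9 -> 4.6%, matching the frequencies quoted above
```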

Manipulated or made-up numbers tend not to follow the first-digit frequencies that Benford’s law predicts. If a data set violates the law, there is a risk that it has been manipulated, and it warrants more thorough analysis. The test is not conclusive in either direction, however: a data set that complies with Benford’s law may still contain fraud, and a deviation from the law does not by itself mean that the data have been modified.
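
As an illustrative sketch only (not a description of KPMG’s actual procedure), such a first-digit test can be set up as a chi-square goodness-of-fit comparison against the Benford frequencies; SciPy is assumed as a dependency, and the significance level alpha is an arbitrary choice:

```python
import math
from collections import Counter

from scipy.stats import chisquare  # assumed dependency

def first_digit(x: float) -> int:
    # Leading digit of a non-zero number, e.g. 0.042 -> 4, 731 -> 7
    return int(f"{abs(x):.15e}"[0])

def benford_test(values, alpha=0.05):
    digits = [first_digit(v) for v in values if v != 0]
    counts = Counter(digits)
    observed = [counts.get(d, 0) for d in range(1, 10)]
    n = len(digits)
    expected = [n * math.log10(1 + 1 / d) for d in range(1, 10)]
    stat, p_value = chisquare(observed, expected)
    # A small p-value flags a deviation worth investigating; it is not proof of fraud
    return stat, p_value, p_value < alpha
```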

Fraud Detection Process

In order to assess the risk of fraud efficiently, the actuarial team at KPMG in Poland has developed a comprehensive process for detecting potential fraud with an emphasis on its economic aspect. This economic aspect combines two perspectives: estimating the amount recovered as a result of identifying the fraud and analysing the cost of verifying it.

KPMG’s approach combines descriptive analysis, customer segmentation, community analysis and predictive analysis. We use machine learning methods to identify fraud patterns based on historical data collected within the organisation as well as external data from a wider context.

The first stage is to define the fraud, i.e. to identify what fraud might mean for the organisation. KPMG’s approach is based on predictive analytics over large data sets (big data) to support fraud detection. Depending on the context of the organisation and of the fraud itself, KPMG analyses the current solutions and defines the necessary analytical models. For this purpose, a wide range of data sources is used, together with specific methods for data fusion, sampling, visual mining and descriptive statistics.
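
By way of a hypothetical illustration, sampling and descriptive statistics over a small claims extract might look as follows in Python with pandas; the column names and values are invented for the example:

```python
import pandas as pd

# Hypothetical claims extract; in practice several fused data sources
claims = pd.DataFrame({
    "claim_amount": [1200.0, 430.5, 98000.0, 310.0, 2750.0],
    "days_to_report": [1, 45, 2, 12, 90],
})

print(claims.describe())                 # descriptive statistics per variable
print(claims.sample(3, random_state=0))  # random sample for manual inspection
```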

Before defining the target model, KPMG applies all the tools needed to fully understand the characteristics and limitations of the available data. This includes handling missing values, detecting and treating outliers, defining flags, standardising data, categorising variables and computing the weight of evidence. The adopted approach allows the input data to be reduced effectively.
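
A minimal sketch of such preparation steps, assuming a pandas DataFrame with numeric features and a binary fraud flag; the median imputation, percentile caps and smoothing constant are illustrative choices, not KPMG’s actual settings:

```python
import numpy as np
import pandas as pd

def prepare(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    num_cols = df.select_dtypes(include="number").columns
    # Missing values: impute with the column median
    df[num_cols] = df[num_cols].fillna(df[num_cols].median())
    # Outliers: winsorise at the 1st/99th percentiles
    for col in num_cols:
        lo, hi = df[col].quantile([0.01, 0.99])
        df[col] = df[col].clip(lo, hi)
    # Standardise to zero mean and unit variance
    df[num_cols] = (df[num_cols] - df[num_cols].mean()) / df[num_cols].std()
    return df

def weight_of_evidence(df: pd.DataFrame, feature: str, target: str) -> pd.Series:
    # WoE per category: ln(share of non-fraud / share of fraud); target is 0/1
    stats = df.groupby(feature)[target].agg(["sum", "count"])
    frauds = stats["sum"] / df[target].sum()
    non_frauds = (stats["count"] - stats["sum"]) / (df[target] == 0).sum()
    return np.log(non_frauds.clip(lower=1e-6) / frauds.clip(lower=1e-6))
```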

Claims are flagged according to their characteristics: the target variable can be binary (fraudulent or not) or continuous (the fraud amount). For this purpose, KPMG uses various advanced analytics techniques to build predictive models, such as linear regression, logistic regression, decision trees, neural networks and multi-class classification techniques.
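
As one concrete instance of the binary case, a fraud flag could be modelled with logistic regression in scikit-learn; the synthetic data below merely stands in for real claim features:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in: roughly 5% positive class, mimicking the rarity of fraud
X, y = make_classification(n_samples=5000, n_features=10, weights=[0.95], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
fraud_scores = clf.predict_proba(X_test)[:, 1]  # estimated fraud probability per claim
```

A continuous target, such as the fraud amount, would be handled analogously with a regression model.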

The search for fraud patterns also takes the social and geographical context into account, since the propensity to commit fraud is influenced by the social environment and the geographical area. Our approach draws attention to the fact that fraud often depends on the simultaneous occurrence of many factors; including more of them allows the very nature of the fraud to be understood better.

To assess the quality of the model, we measure the predictive performance of the analytical model, deciding first how to partition the data set and which performance metrics to use.
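
For example (one reasonable setup among many), partitioning and metric choice can be combined in stratified cross-validation, with ROC AUC as a metric suited to rare-event data:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=5000, weights=[0.95], random_state=0)

# Stratified folds preserve the rare fraud rate in every partition
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv, scoring="roc_auc")
print(f"AUC per fold: {auc.round(3)}, mean {auc.mean():.3f}")
```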

In the final step, we test the fraud model, the stability of the data included in the model and the model’s calibration. The whole process is implemented while taking into account aspects specific to the organisation with a particular focus on current legislation (e.g. the General Data Protection Regulation). The process is defined in a way that ensures appropriate access to internal and external data to assess fraud estimates. These estimates can be used to calculate both expected and unexpected losses, which then help determine risk margins and capital buffers.
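
One widely used stability check, shown here purely as an illustration rather than as KPMG’s specific procedure, is the population stability index (PSI), which compares the score distribution at development time with the current one; the 0.25 threshold in the comment is a common rule of thumb:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    # Bin edges taken from the development-time (expected) score distribution
    edges = np.unique(np.quantile(expected, np.linspace(0, 1, bins + 1)))
    edges[0], edges[-1] = -np.inf, np.inf
    e = np.histogram(expected, edges)[0] / len(expected)
    a = np.histogram(actual, edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
    # PSI above roughly 0.25 is commonly read as a significant shift in the data
    return float(np.sum((a - e) * np.log(a / e)))
```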

Currently, advanced analytics play a key role in fraud management. Every organisation should identify areas at risk of fraud and implement comprehensive fraud detection and verification processes to effectively and dynamically respond to fraud challenges. The process implemented by KPMG considers every aspect of the organisation at risk of fraud, including the specifics of the business.

Author:

Marcin Zabój, Manager, Actuarial services, KPMG in Poland and CEE