AI threatens to add to the growing wave of fraud but is also helping tackle it
- Written by Laurence Jones, Lecturer in Finance, Bangor University
There were 4.5 million[1] reported incidents of fraud in the UK in 2021/22, up 25% on the year before. It is a growing problem which costs billions of pounds every year.
The COVID pandemic and the cost of living crisis have created ideal conditions[2] for fraudsters to exploit the vulnerability and desperation of many households and businesses. And as the use of AI grows, we are likely to see new types of fraud[3] emerge; indeed, AI is probably already contributing to the increased frequency of fraud we see today.
Already, the ability of AI to absorb personal data, such as emails, photographs, videos and voice recordings[4], to imitate people is proving to be an unprecedented challenge.
But there is also an upside. The government, banks and other financial organisations are now fighting back with increasingly sophisticated fraud-detection methods. AI and machine learning models could be a part of the solution[5] to deal with the increasing complexity, sophistication and prevalence of such scams.
The rising gap between prices and people’s incomes appears to have made people more receptive[6] to scams which offer grants, rebates and support payments.
Fraudsters often target individuals by posing as genuine organisations. Examples include pretending to be your bank, or posing as the government and telling you that you are eligible for a lucrative scheme, in order to steal your identity details and then your money.
This follows a dramatic rise in recent years in fraudulent applications to government and regional support packages, mainly those implemented in response to the pandemic. Here, fraudsters often set up fake businesses to secure multiple loans or grants.
One of the most outlandish examples[7] of this was a Luton man who posed as a Greggs bakery to swindle three local authorities in England out of almost £200,000 worth of COVID small business grants.
The hurried rollout of such schemes to achieve faster economic impact made it difficult for officials to review applications effectively. The UK government's Department for Business and Trade now estimates[8] that 11% of such loans, roughly £5 billion, were fraudulent. By March 2022, only £762 million had been recovered[9].
Fraud detection
Over the past few years, complex mathematical models combining traditional statistical techniques and machine learning analysis have shown promise in the early detection[10] of financial statement fraud. This is when companies misrepresent their accounts, deceiving investors into believing they are more profitable than they really are.
One of the breakthroughs has been the incorporation of both financial and non-financial information into data analysis systems. For example, the risk of fraud decreases if there is better corporate governance[11] and a lower proportion of directors who are also executives.
In a small business context, we can think about this as promoting transparency and making sure that no single person has sole authority to make significant decisions.
Such data analytics models can be used to rank applications in terms of potential fraud risk, so that the riskiest applications get additional scrutiny by government officials. We are now starting to see implementations of such systems to tackle universal credit[12] fraud, for example.
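To illustrate the idea, here is a minimal sketch of how such a risk-ranking system might work. It is not any agency's actual system: the feature names (loan amount, turnover, company age, address match) and the synthetic data are illustrative assumptions, but the workflow, training a classifier on financial and non-financial features, then ranking applications by predicted fraud probability so the riskiest get human scrutiny first, reflects the approach described above.

```python
# A hypothetical sketch of risk-ranking applications with a model trained
# on both financial and non-financial features. Feature names and the
# synthetic data are illustrative assumptions, not a real system.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5_000

# Financial features: e.g. amount requested, reported turnover.
amount = rng.uniform(1_000, 50_000, n)
turnover = rng.uniform(10_000, 1_000_000, n)

# Non-financial features: e.g. company age in months, and whether the
# business address matches official registry records.
company_age = rng.integers(0, 240, n)
address_match = rng.integers(0, 2, n)

# Synthetic ground truth: young companies asking for large sums relative
# to turnover, with mismatched addresses, are more often fraudulent.
risk = 2.5 * (amount / turnover) + (company_age < 6) + (address_match == 0)
fraud = (risk + rng.normal(0, 0.5, n)) > 1.5

X = np.column_stack([amount, turnover, company_age, address_match])
X_train, X_test, y_train, y_test = train_test_split(X, fraud, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)

# Rank unseen applications by predicted fraud probability so the riskiest
# ones are routed to a human assessor first.
scores = model.predict_proba(X_test)[:, 1]
review_order = np.argsort(scores)[::-1]
print("Top 5 applications to review:", review_order[:5])
print("Their risk scores:", np.round(scores[review_order[:5]], 3))
```

The key design point is that the model's output is a priority ordering for human reviewers, not an automatic accusation.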
Banks, financial services providers[13] and insurers[14] are developing machine-learning models to detect financial fraud too. A Bank of England survey published in October 2022 revealed[15] that 72% of financial services firms are already testing and implementing them.
We are also seeing new collaborations in the industry, with the likes of Deutsche Bank partnering with chip maker Nvidia to embed AI[16] into their fraud detection systems.
Risks of AI systems
However, the advent of new automated AI systems brings with it worries about unintended biases. In a recent trial[17] of a new AI fraud-detection system by the Department for Work and Pensions, campaign groups raised concerns about exactly this kind of bias.
A common issue with such systems is that they work well for the majority of people but are often biased against minority groups. This means that, if left unadjusted, they are disproportionately likely to flag applications from ethnic minorities as risky.
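One basic way to surface this problem is to compare flag rates across demographic groups before a system is deployed. The sketch below is an illustrative assumption, not a documented audit procedure: the group labels, threshold and simulated bias are made up, but it shows the kind of check campaigners are asking for.

```python
# A minimal, hypothetical fairness check: compare flag rates across groups.
# The group labels, threshold and simulated data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
groups = rng.choice(["group_a", "group_b"], size=2_000, p=[0.8, 0.2])
scores = rng.beta(2, 8, size=2_000)
# Simulate a biased model that scores the minority group slightly higher.
scores[groups == "group_b"] += 0.10

THRESHOLD = 0.4  # applications above this score are flagged for review
flagged = scores > THRESHOLD

for g in ("group_a", "group_b"):
    rate = flagged[groups == g].mean()
    print(f"{g}: {rate:.1%} of applications flagged")

# A large gap between the two rates is a warning sign: the model may be
# disproportionately targeting one group and needs recalibration before
# it is used to inform real decisions.
```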
Read more: Scams, deepfake porn and romance bots: advanced AI is exciting, but incredibly dangerous in criminals' hands[18]
Given these risks, AI systems should not be used as a fully automated process for detecting fraud and accusing people, but rather as a tool[19] to assist assessors. They can help auditors and civil servants, for example, to identify cases that require greater scrutiny and to reduce processing time.
References
- ^ 4.5 million (www.ons.gov.uk)
- ^ ideal conditions (www.bbc.co.uk)
- ^ new types of fraud (www2.deloitte.com)
- ^ voice recordings (www.cbsnews.com)
- ^ part of the solution (www.weforum.org)
- ^ receptive (www.citizensadvice.org.uk)
- ^ most outlandish examples (www.manchestereveningnews.co.uk)
- ^ estimates (www.bbc.co.uk)
- ^ had been recovered (www.gov.uk)
- ^ early detection (onlinelibrary.wiley.com)
- ^ better corporate governance (onlinelibrary.wiley.com)
- ^ universal credit (www.theguardian.com)
- ^ Banks, financial services providers (www.ft.com)
- ^ insurers (www.ft.com)
- ^ revealed (www.bankofengland.co.uk)
- ^ embed AI (www.db.com)
- ^ recent trial (www.bbc.co.uk)
- ^ Scams, deepfake porn and romance bots: advanced AI is exciting, but incredibly dangerous in criminals' hands (theconversation.com)
- ^ as a tool (www.ft.com)