No Silver Bullets: Q&A on Fighting Digital Banking Fraud
Posted August 28, 2018
As the global banking sector undergoes an unprecedented level of technological transformation, incumbents and innovative new entrants are racing to seize emerging opportunities and gain a competitive edge in a rapidly changing marketplace.
That comes with serious challenges. But the toughest? Cybercriminals out to monetize stolen identity data in order to take over customer accounts, create fraudulent new accounts, or make illegal payments. By the end of the year, worldwide losses from cyberattacks like these are expected to reach as much as $1.6 trillion, a figure that could climb to $6 trillion by 2021.
With that as a backdrop, we discuss where the industry stands in the fight against fraud, and where things are headed next.
Q: How are digital banking fraud attack techniques evolving, and are new security technologies keeping up?
A: Recently, digital fraud growth has been driven predominantly by social engineering scams and authorised push payments (APP) fraud. Where previously fraudsters would have used their own devices and stolen credentials to commit digital fraud, now they have zeroed in on the weakest link in the customer journey—the customer themselves.
The general feeling in the banking industry is that five years ago, digital banking fraud split roughly 50:50 between account takeover and scams; last year this tipped to approximately 30:70 in favour of scams.
This kind of fraud, unfortunately, is very hard to spot. A transaction can appear to come from the customer’s usual location and look like normal behaviour, since nothing about the device, IP address or overall digital identity has changed. Similarly, it can be very hard to detect fraud when remote access software is secretly installed onto a customer’s device by a fraudster after a social engineering or phishing attack.
However, there’s no doubt that emerging technologies are becoming increasingly intelligent in detecting even subtle deviations from legitimate customer behaviour—for example, a fraudster impersonating a customer and making a high-value transaction.
New technologies are successfully evolving to factor in an increasing array of behavioural characteristics in real time in order to identify anomalies that indicate fraud: the type of payment, the value of the payment, whether the beneficiary looks like a money mule, time on page, and so on. By getting a unified view of all the attributes that come into play, financial services organisations can successfully protect themselves and their consumers.
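The kind of weighted, multi-attribute scoring described above can be sketched as follows. The feature names, weights and rules here are illustrative assumptions only, not any vendor's actual model (real systems typically learn such weights with machine learning rather than hand-coding them):

```python
# A minimal sketch of real-time transaction risk scoring that combines
# several behavioural attributes into a single 0..1 score.
# All feature names, weights and thresholds are hypothetical.

def risk_score(txn: dict) -> float:
    """Combine weighted behavioural signals into a 0..1 risk score."""
    score = 0.0
    if txn["payment_type"] == "faster_payment":          # instant, hard to recall
        score += 0.2
    if txn["amount"] > 10 * txn["customer_avg_amount"]:  # unusually large payment
        score += 0.3
    if txn["beneficiary_flagged_as_mule"]:               # known mule-account signal
        score += 0.4
    if txn["seconds_on_page"] < 5:                       # scripted or coached speed
        score += 0.1
    return min(score, 1.0)

txn = {
    "payment_type": "faster_payment",
    "amount": 9500.0,
    "customer_avg_amount": 300.0,
    "beneficiary_flagged_as_mule": True,
    "seconds_on_page": 3,
}
print(risk_score(txn))  # -> 1.0 (every signal fires)
```

A score above a chosen threshold would then trigger step-up authentication or manual review rather than an outright block.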
Q: Is it possible to have a fully autonomous fraud prevention process?
A: Achieving a fraud prevention process that works in real time, is cost-effective and scalable, and provides instant decisions relies on automating as much of the process as possible. However, we are still some way from it being fully autonomous.
As we adopt increasingly sophisticated machine learning and artificial intelligence technologies, one of the big challenges lies in giving AI and ML the kind of subjective insight a human analyst brings. When a fraudster attacks a bank’s systems and processes, you’re dealing with a human adversary who could even be using the same tools the bank uses.
When it comes to fraud prevention, ‘black-box’ machine learning models offer no subjective insight into an attack, making it harder to detect patterns that humans would otherwise know to look for from previous experience.
The alternative approach is ‘clear-box’ machine learning models that are designed to optimize fraud prevention policies and work in tandem with human analysts. This can equip experts with an unprecedented amount of insight and intelligence to assess transactions that the AI has flagged as high risk or in need of further review.
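The difference between the two approaches comes down to explainability. The sketch below shows the ‘clear-box’ idea: every rule that fires contributes a human-readable reason code, so an analyst reviewing a flagged transaction can see exactly why it was escalated. The rules and field names are illustrative assumptions, not a real policy:

```python
# Sketch of a 'clear-box' assessment: each triggered rule records a
# reason code alongside the decision, giving analysts the context a
# black-box score would hide. Rules and thresholds are hypothetical.

def assess(txn: dict) -> tuple[str, list[str]]:
    reasons = []
    if txn["amount"] > txn["daily_limit"]:
        reasons.append("amount exceeds customer's daily limit")
    if txn["new_beneficiary"]:
        reasons.append("first payment to this beneficiary")
    if txn["remote_access_detected"]:
        reasons.append("remote access software active during session")
    decision = "review" if reasons else "allow"
    return decision, reasons

decision, reasons = assess({
    "amount": 5000.0,
    "daily_limit": 2000.0,
    "new_beneficiary": True,
    "remote_access_detected": False,
})
print(decision, reasons)
# -> review ["amount exceeds customer's daily limit",
#            "first payment to this beneficiary"]
```

In practice the rule set itself can be optimised by ML, while the reason codes keep the resulting policy auditable by humans.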
The one place where humans will always play a role—at least for the foreseeable future—is in analysing the data and fraud patterns in order to evolve defences and create policies that keep up with new and emerging fraud techniques.
Q: What will the future of fighting fraud look like? For example, is biometrics likely to be a fraud-proof option anytime soon? What else?
A: When it comes to the future of fighting fraud, tools such as biometrics, machine learning and AI will all be key defences for organisations. There is, unfortunately, no silver bullet for fraud.
For example, when it comes to biometrics, fraud will always be found at the point where a device is registered with the bank—a fraudster can potentially take stolen bank credentials and use those details to register their own device.
Organisations have realised that only by analysing each and every customer interaction will they be able to baseline individual customer behaviour and spot anomalies that could indicate fraud.
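Baselining individual behaviour can be as simple as comparing each new payment against the statistics of that customer's own history. The sketch below uses a z-score with a common 3-sigma heuristic; the threshold and approach are illustrative assumptions, not any vendor's actual method:

```python
# A minimal per-customer baseline: flag a payment whose amount deviates
# strongly from that customer's own history. The 3-sigma threshold is a
# common heuristic chosen for illustration.
from statistics import mean, stdev

def is_anomalous(history: list[float], amount: float, threshold: float = 3.0) -> bool:
    """True if `amount` sits more than `threshold` standard deviations
    from the mean of this customer's past payments."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold

history = [42.0, 55.0, 38.0, 61.0, 47.0]   # this customer's typical payments
print(is_anomalous(history, 50.0))    # -> False (within normal range)
print(is_anomalous(history, 2500.0))  # -> True  (far outside the baseline)
```

Production systems baseline many more dimensions than amount (device, location, beneficiary, session behaviour), but the principle of comparing against the individual rather than the population is the same.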
Unfortunately, false positives or unwarranted customer fraud interventions have become all too common. When you get things wrong, you have a frustrated customer base. When you get things right, everything appears seamless—but only if you have solid fraud controls acting in the background.
Organisations have also realised that a wealth of different data points and approaches is needed to analyse transaction risk and verify digital identities in order to maximise threat detection and reduce risk.
The same is true of a ‘Day Zero’ attack, a fraud technique that has not been seen before. Organisations are realising that the API route is the only way forward to maintain the kind of agility that is essential for fraud prevention.
ThreatMetrix has been at the forefront of using APIs to help organisations authenticate their customers passively whilst surfacing anomalous behaviour that indicates fraud.
To learn more and see how ThreatMetrix helps Lloyds Banking Group detect high-risk behavior in real time, check out this case study.