Why Pay-By-Selfie Won’t Fool the Fraudsters
Posted July 22, 2015
MasterCard recently unveiled trials of a new facial biometric authentication system for online purchases. This new “pay-by-selfie” service garnered plenty of headlines, but will it actually be able to help prevent fraud? Without wanting to dismiss any new initiative designed to combat online criminals, I fear it’s the wrong approach.
Caught on camera
The pilot itself seems straightforward enough. Users are prompted on check-out to hold their phone up and take a shot of their face, which is then either given the green or red light by facial recognition software. MasterCard has already partnered with major smartphone ecosystem providers and is apparently working on deals with some banks.
While MasterCard is obviously trying to cash in on the current craze for taking “selfies”, marketing the service as a more user-friendly alternative to other two-factor authentication systems, I’d argue that it’s actually not that low friction at all. And this could have significant implications for TCO and user take-up.
High friction, low security
The success of biometrics is predicated on their being very low friction and working in a highly secure, highly reliable manner. My fear is that the technology might block legitimate transactions because the user has a temporary facial disfigurement, for example, or is holding the camera at slightly the wrong angle. Then just think about taking selfies. It’s not really a low-friction task – in fact, walk to any major landmark in the centre of London and watch how long it takes a user to frame and then take that perfect shot of themselves.
Add friction and unreliability and you’re on the back foot straightaway when it comes to fraud prevention. If users feel it’s too much hassle they’ll drop out of the checkout. In fact, the cost associated with lost sales from dropouts and incorrectly blocking legitimate transactions can soon mount up to exceed predicted fraud losses. Go too far the other way, however, and you could risk letting the fraudsters through.
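To see how quickly those dropout costs can outweigh the fraud they prevent, here is a back-of-the-envelope sketch. The figures (transaction volume, fraud rate, decline and dropout rates) are purely hypothetical, chosen only to illustrate the trade-off:

```python
# Hypothetical numbers for illustration only: weigh the revenue lost to
# false declines and checkout dropouts against the fraud losses prevented.
def net_cost(transactions, avg_order, fraud_rate, catch_rate,
             false_decline_rate, dropout_rate):
    fraud_prevented = transactions * fraud_rate * catch_rate * avg_order
    lost_sales = transactions * (false_decline_rate + dropout_rate) * avg_order
    # Positive result: the authentication step costs more than it saves.
    return lost_sales - fraud_prevented

# 1M transactions at £50 each; 0.1% fraud, 90% of it caught, but 1% of
# legitimate buyers falsely declined and 2% abandoning the extra step.
print(net_cost(1_000_000, 50.0, 0.001, 0.9, 0.01, 0.02))
```

With these assumed numbers, a 3% combined loss of legitimate sales dwarfs the £45,000 of fraud actually stopped – which is exactly why a high-friction check can be a net loss even when it “works”.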
We all know cybercriminals are adept at probing away to find the holes in every new security or authentication system. With Apple Pay, fraud losses in the US rose not because there was anything wrong with Touch ID or Apple’s tokenisation set-up, but because the banks were not authenticating properly during new card provisioning. Similarly, we know that some facial biometric systems have already been exposed as open to spoofing with photos.
Even if new systems come along which claim to be more secure, there will always be the risk of a user’s picture being stolen, hacked or spoofed. In fact, it might be even easier to do this than to steal a password – just go to their Facebook profile page.
A better way
The fundamental problem with facial recognition technology, and biometrics in general, is that there are significant security holes in all variations. And once a hack has been found, the user cannot simply change their unique biometric identifier – be it face, voice, fingerprint etc – as if it were a password.
At ThreatMetrix® we are no fans of the status quo either. Passwords have been shown time and again to be an outdated and ill-equipped approach to protecting user accounts. A tried-and-tested approach that continues to work incredibly well and creates zero friction is anonymous behavioural analytics.
At ThreatMetrix we don’t know who you are, we have no photos of your face, we don’t even know your email address. What we do know is whether what you are doing now is consistent with what you’ve done in the past and consistent with normal (i.e. non-criminal) activity. This knowledge helps us protect your accounts from takeover and from criminals signing up for new services in your name.
But it also does away with annoying challenge questions, security tokens and having to take selfies just to buy online.
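The idea of checking whether behaviour is consistent with past activity can be sketched in a few lines. To be clear, this is a toy illustration of behavioural anomaly detection in general, not ThreatMetrix’s actual algorithm – the purchase amounts and the three-sigma threshold are made up for the example:

```python
# Toy sketch of behavioural consistency scoring (hypothetical, not any
# vendor's real method): flag a transaction whose amount deviates sharply
# from the account's own history, without knowing anything about the user.
from statistics import mean, stdev

def is_consistent(history, amount, max_z=3.0):
    """Return True if `amount` looks consistent with past behaviour."""
    if len(history) < 2:
        return True  # too little history to judge; don't block
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount == mu
    # z-score: how many standard deviations from this account's norm?
    return abs(amount - mu) / sigma <= max_z

past_purchases = [24.0, 31.5, 28.0, 26.5, 30.0]
print(is_consistent(past_purchases, 29.0))   # typical purchase
print(is_consistent(past_purchases, 950.0))  # wildly out of pattern
```

Real systems weigh hundreds of signals rather than a single amount, but the principle is the same: the check runs silently in the background, so the legitimate user never has to pose for a photo at checkout.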