Posted August 28, 2017
Machine learning has been gaining a lot of momentum lately, as security and risk professionals see it as a valuable tool in their cybersecurity arsenal. However, this wave of enthusiasm might recede as quickly as it arrived.
That could be due, in part, to a perception among corporate security officials that some solutions lack actual machine learning.
According to Forbes, security and risk professionals fear that some vendors claiming to use machine learning are not really doing so, or are doing so at a level far below what they are promising. A growing concern is that intricate concepts, such as machine learning, are acting as a smokescreen for a cybersecurity product’s shortcomings.
This black-box approach leaves fraud analysts in the dark with no ability to explore the heart of a solution, and could hamper the adoption of machine learning as a solution in the ongoing battle against cybercrime.
The reason: Despite their ever-increasing sophistication, these machine learning systems still need a human touch to achieve peak performance.
The enormous growth of online transactions has made the use of machine learning by fraud and risk professionals a much more efficient, and economical, process. Corporate budgets can’t scale to hire enough experts to review and analyze this vast volume of transactions by hand at anywhere close to the pace at which online transactions grow. And the frustrated customers and lost business resulting from the relative snail’s pace of manual reviews could cause irreparable harm to the business.
By building models based on historical transactions and establishing networked intelligence across a spectrum of data sources, machine learning algorithms can spot anomalies that might signal new or existing forms of fraud in real time.
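As a simplified illustration of the idea – not any vendor’s actual model – a minimal anomaly detector can flag transactions whose amounts deviate sharply from a historical baseline. The field names and threshold below are hypothetical; a production system would combine hundreds of signals, not one:

```python
import statistics

def find_anomalies(historical_amounts, new_amounts, z_threshold=3.0):
    """Flag new transaction amounts that deviate sharply from history.

    This sketch uses a single z-score on transaction amount purely for
    illustration; real fraud models score many signals at once.
    """
    mean = statistics.mean(historical_amounts)
    stdev = statistics.stdev(historical_amounts)
    anomalies = []
    for amount in new_amounts:
        z = (amount - mean) / stdev if stdev else 0.0
        if abs(z) > z_threshold:
            anomalies.append(amount)
    return anomalies

history = [20.0, 25.0, 22.0, 30.0, 18.0, 24.0, 27.0]
print(find_anomalies(history, [23.0, 500.0]))  # only the 500.0 charge stands out
```

The same principle – learn a baseline from historical data, then score new events against it in real time – underlies far more sophisticated models.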
However, even the most sophisticated machine learning systems need to compensate for the fact that they’re trained by humans. And, to ensure that these systems are spotting true fraud instead of falsely flagging actual customers, their decisions are going to be evaluated by humans – be they an application owner, a business leader, or an end customer.
Machine learning systems analyze a myriad of signals — all the pieces of data collected on a transaction, including the age of an IP address, the device location, the average reputation of an email, and hundreds more. While some of those signals are linear — containing data that exists over time — some are not.
Without machine learning, the trained eye of an expert would be needed to find the connection between these diverse types of signals. But, as cybercriminals become more sophisticated, they can launch attacks at ever-increasing scales — scales that can’t be matched by organizations using experts to classify good and bad interactions.
Security and risk professionals have identified a natural progression in their efforts – a move to supervised machine learning. Unfortunately, solutions using a black-box model can’t be optimized to approach the effectiveness of an expert.
In her blog, Gartner analyst Avivah Litan argues that security vendors should incorporate a clear-box approach in their solutions.
This approach is based on the notion that the most accurate systems have adjustable rules that allow them to look for the correct signals, have the best data, and can clearly express the “why” behind their decisions.
Sharing all signal information with customers transparently shows businesses why a specific decision was taken by the system, providing insight into why the machine thinks the information is good or bad. Businesses gain a better understanding of their customers, as well as valuable information they can use to influence other business decisions. This hands-on approach to machine learning gives customers the ability to optimize to a deeper level and receive a clear view into the reasoning baked into their custom-fit model.
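One way to picture a clear-box decision is a rule set that returns not just a score but the specific signals that fired. The signal names, weights, and thresholds below are invented for illustration and are not any vendor’s actual rules:

```python
def score_transaction(signals, rules):
    """Apply adjustable rules to a transaction's signals; return the
    risk score plus the 'why' -- a reason for every rule that fired."""
    score = 0
    reasons = []
    for name, (predicate, weight, reason) in rules.items():
        if predicate(signals):
            score += weight
            reasons.append(reason)
    return score, reasons

# Hypothetical signals and thresholds, purely for illustration.
rules = {
    "young_ip": (lambda s: s["ip_address_age_days"] < 7, 40,
                 "IP address first seen less than a week ago"),
    "bad_email": (lambda s: s["email_reputation"] < 0.3, 35,
                  "email address has a poor reputation"),
    "location_mismatch": (lambda s: s["device_country"] != s["billing_country"], 25,
                          "device location does not match billing country"),
}

txn = {"ip_address_age_days": 2, "email_reputation": 0.9,
       "device_country": "US", "billing_country": "US"}
score, why = score_transaction(txn, rules)
print(score, why)  # 40 ['IP address first seen less than a week ago']
```

Because every rule carries a human-readable reason and an adjustable weight, an analyst can both audit why a transaction was flagged and tune the system — the opposite of a black box.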
Without this expertise and information, even the most sophisticated machine learning teams can miss out on signals.
When we engage with a potential customer, we combine real-time actionable data from our Digital Identity Network with an organization’s truth data (data collected from their own transactions), and build this into real-time decision strategies. It is a clear and powerful demonstration of the effectiveness of our clear-box approach.
However, one potential customer was so confident in its own unsupervised machine learning system that it wanted to use only our data and apply its own machine learning to see the results. When it did, it reported only an “incremental uplift” over its existing results.
Then, our applied machine learning team identified areas of incorrect interpretations, provided direction on new signals to look for and transformed these insights into an optimized real-time decision model.
With this expert guidance and a new set of rules, another run-through resulted in the customer’s system catching more than 140 times the number of fraudulent transactions. After an impressive showing like that, we no longer refer to them as a potential customer.
While our clear-box approach differentiates us from others, so does our data.
The ThreatMetrix Digital Identity Network collects and processes global shared intelligence from millions of daily consumer interactions, including logins, payments and new account applications. It recognizes behavior and identities across 4.5 billion unique devices from 1.4 billion anonymized users worldwide.
Potential customers don’t have this data. The reach of our data allows us to, for example, identify new devices that are unrecognizable to other businesses.
Businesses can have the most sophisticated machine learning systems and teams available, but our data is one of a kind. And we have experts who know how to work with it.
There is just no substitute for that.
Mr. Welch has extensive experience in artificial intelligence, including five years as Vice President of Sales and Service at Inference (acquired by HP) and later Brightware (acquired by Firepond).