Posted July 28, 2017
Machine learning sure does generate a lot of hype these days. But can it really live up to its billing in the fight against cybercrime?
Without a doubt, the age of artificial intelligence (AI) and machine learning is heralded as the next big “era” of computing — a fait accompli made inevitable by a PC era that begat the Internet age, which, in turn, spawned today’s anytime, anywhere mobile world.
Corporate investment in machine learning is expected to triple this year — and top $100 billion by 2025. Nearly 30 percent of executives in a recent survey predict it will be the biggest disruptor to their industries in the next five years.
Yet nowhere does machine learning’s potential appear more prodigious than in the world of cybersecurity.
The typical organization loses 5 percent of revenue each year to fraud. According to Juniper Research, that will translate into $8 trillion in losses worldwide over the next five years.
By building models based on historical transactions and establishing networked intelligence across a spectrum of data sources, machine learning algorithms can spot anomalies that might signal new or existing forms of fraud in real time.
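As a minimal sketch of the idea, here is what anomaly detection over historical transactions might look like. This assumes scikit-learn's `IsolationForest` and purely synthetic data; the features and thresholds are illustrative, not any vendor's actual model.

```python
# Hedged sketch: flagging anomalous transactions with an Isolation Forest.
# All data is synthetic; the feature choices are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Historical transactions for a typical customer: [amount, hour_of_day].
normal = np.column_stack([
    rng.normal(50, 15, 500),   # amounts clustered near $50
    rng.normal(14, 3, 500),    # activity clustered in mid-afternoon
])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal)

# New activity: one routine purchase, one large 3 a.m. transfer.
new = np.array([[48.0, 15.0], [900.0, 3.0]])
flags = model.predict(new)     # +1 = looks normal, -1 = anomaly
print(flags)
```

In production such a model would score events in real time and feed a broader decisioning layer, but the core pattern is the same: learn what "normal" looks like from history, then flag departures from it.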
Better, Faster, Smarter, Safer
At its most essential, machine learning is a form of AI in which machines are given access to data and the ability to learn on their own. While it might sound futuristic, the concept isn’t exactly new.
It was 1950 when computing pioneer Alan Turing first posed the question, “Can machines think?”
What became known as the Turing Test holds that the only sure way to confirm true intelligence in a machine, meaning the ability to learn and apply knowledge, is for a human questioner to be unable to distinguish the machine's responses from those of another human being.
On that measure, machine learning doesn’t meet Turing’s criteria quite yet. But the strides machine learning is making still matter — a lot.
Many of today’s digital consumers would rather interact with machine learning-enabled chatbots than with live human customer service reps. And machine learning routinely sifts through thousands of job applications to shortlist the candidates best suited to a given role.
Indeed, from supply chain management, to CRM, to finance, to advertising, to conversational commerce and more, machine learning is automating, streamlining and, over time, enhancing a plethora of business processes.
Yet beyond all these applications, the primary reason it’s safe to say machine learning is the future of cybersecurity is that it’s also the future of cybercrime itself.
The New ‘Imitation Game’
Armed with a mountain of personal identity data stolen through corporate data breaches, cybercriminals will soon begin deploying “malicious machine learning” algorithms that make it easier than ever for imposters to take over customer accounts or create fraudulent new ones.
The machine learning tools needed to perform complex analysis for target selection in, let’s say, phishing attacks are already available—promising a disturbing new authenticity in fraudulent email.
Using machine learning, attacks can be timed to a victim’s upcoming travel as ascertained by social media posts, or mimic the language used by people in the victim’s address book or online interactions to get the tone of phishing messages just right.
Malicious machine learning could also be targeted at chatbots used for customer service and social media, as well as to phone calls via a new generation of voice bots.
Experts also predict machine learning could even be used to formulate new malware or ransomware designed to stay one step ahead of detection. It could change the nature of cybercrime itself. Instead of stealing corporate data, for instance, machine learning could be used to simply alter it, wreaking havoc within enterprise systems.
Machine Learning in the Here and Now
Companies in payments, banking, retail and other industries are increasingly using machine learning as part of multi-layered, digital identity-based systems, such as those from ThreatMetrix, designed to provide the kind of cybersecurity necessary to thwart the increasingly sophisticated attacks on the horizon.
ThreatMetrix combines real-time actionable data from our Digital Identity Network with an organization’s data to generate rules and attributes that are optimal to solve that company’s problem at hand.
This clear-box approach provides insight into why the machine thinks the information is good or bad. Businesses can better understand their customers and use this information to influence other business decisions, while also increasing the efficiency of their fraud and risk teams.
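The general "clear-box" pattern can be sketched simply: every rule that fires is reported by name alongside the score, so a fraud analyst can see why an event was flagged. The rules, thresholds, and point values below are invented for illustration and are not ThreatMetrix's actual logic.

```python
# Hedged sketch of a clear-box risk score: each triggered rule is named,
# so the output explains itself. Rules and weights are illustrative only.
RULES = [
    ("new_device",     lambda e: e["device_age_days"] < 1,          40),
    ("geo_velocity",   lambda e: e["km_since_last_login"] > 500,    35),
    ("unusual_amount", lambda e: e["amount"] > 5 * e["avg_amount"], 25),
]

def score_event(event):
    """Return (risk_score, list of rule names that fired)."""
    fired = [(name, pts) for name, test, pts in RULES if test(event)]
    return sum(p for _, p in fired), [n for n, _ in fired]

risk, reasons = score_event({
    "device_age_days": 0.2,      # device first seen today
    "km_since_last_login": 1200, # login far from last location
    "amount": 60.0,
    "avg_amount": 55.0,          # amount is in line with history
})
print(risk, reasons)  # 75 ['new_device', 'geo_velocity']
```

Contrast this with an opaque model that returns only a probability: here the reasons travel with the score, which is what lets risk teams tune rules and explain decisions to the business.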
This method is gaining recognition from analysts, including Gartner analyst Avivah Litan, who advises security vendors to incorporate such an approach in her latest blog post.
The Race is On
Machine learning for cybersecurity is a revolution that’s picking up speed. Nearly 40 percent of C-level executives indicate machine learning-enabled systems will be a primary means of managing cybersecurity.
Which means the real question isn’t whether machine learning is the future of cybersecurity. It’s whether organizations are prepared to keep up with — and hopefully shut down — cybercriminals in the new machine learning arms race.