Phantom Menace: Cybersecurity and the Bot Revolution
Posted June 21, 2017
This is the first in a three-part series about bots.
A Bot-pocalypse might be closer than you think.
In her 2017 Internet Trends Report, Mary Meeker points out that, after a brief decline, bot-generated traffic has once again overtaken human-generated Internet traffic. And unfortunately, most of those bots are up to no good.
From January through March, there was a 180-percent increase in bot attacks targeting financial services firms, according to the Q1 2017 Cybercrime Report from ThreatMetrix.
These bots—let’s call them “FinBots”—are primarily designed to mass-test stolen identity credentials to break into customer accounts. Overall, financial services companies see four times as many bot attacks as other industries.
And earlier this month, word hit that the malware known as Fireball has now infected 250 million computers and 20 percent of all corporate networks worldwide. That particular piece of code redirects search traffic to plant tracking pixels that collect sensitive personal data.
Once Fireball has infected a machine, it can be remotely controlled to pull in other malware. Indeed, it can even be used to conscript infected machines into botnets that could be worse than last year’s Mirai distributed denial-of-service (DDoS) attack or May’s WannaCry attack. At this writing, 10 percent of all Fireball-infected networks are in the U.S.
It’s all enough to have Dr. Sandro Gaycken, head of NATO’s cyber defense project, expressing fears that bot-driven ransomware could accidentally cause computers controlling a nuclear arsenal to crash or behave unpredictably.
“We could have a situation where up to 3,000 nuclear missiles are affected by one attack,” he said.
Bots, of course, are small applications that perform automated tasks. Some might set your thermostat, give you quotes on a loan, settle an insurance claim, play tricks on irksome telemarketers, enable you to apply to thousands of jobs at once, or order your favorite pizza for home delivery every Wednesday night. Siri and Alexa are bots at the forefront of voice-based digital services.
Sometimes, however, bots have more nefarious purposes. In some DDoS attacks, “botnets” of hijacked “zombie” computers are used to flood a website with traffic. Bots were recently used to post 17,000 anti-net neutrality comments on the FCC site in hope of influencing public opinion, crashing the system for hours.
They’re also often used for loan stacking, which targets online lenders by using stolen identity data to apply for multiple loans at once. Sometimes they test login credentials to take over user accounts, or to deliver ransomware or other malicious code.
Indeed, 80 percent of all online fraud attempts now involve bots, and nearly half of businesses report being the target of such attacks in just the past year.
While bots are generating a lot of buzz these days, they’ve actually been around for more than half a century — the first was developed in 1966. But with the advent of artificial intelligence and other technologies, they’re becoming mainstream for cybercriminals of every ilk, from international crime rings down to coding-illiterate thugs.
Using artificial intelligence (AI), advanced hackers can attack the underlying infrastructure or application framework of brand chatbots—accessing the same sensitive data those bots now use to facilitate transactions or manage bank accounts. In part two of this series, we’ll look at how they might also deploy “imposter bots” to fool prospects and customers with fake news or phishing expeditions.
Especially lucrative to hackers: developing ready-made, bots-as-a-service (BaaS) offerings that they can sell to even the lowest-skilled thieves for $20 to $30 a month. That means you can expect bot-based loan stacking and credentials testing to skyrocket. In fact, “now any idiot and their dog can set up a Mirai botnet,” warns Marcus Hutchins, a 22-year-old cybersecurity analyst who helped shut down WannaCry.
Fighting back isn’t easy—but it’s also not impossible.
Joining the Counter Revolution
With an endless stream of compromised identity credentials available to cyber-thieves on the dark web, organizations in every industry are finding that outdated authentication systems relying on those credentials no longer suffice in the battle against the bots.
Instead, many are turning to digital identity-based authentication that combines behavioral analytics, advanced machine learning and global, crowdsourced threat intelligence to stop bots by detecting:
- Compromised devices or connections infected with malware, signs of identity or location spoofing, and mechanisms used to mass-test identities.
- Contextual anomalies that deviate from established patterns between users and their devices, networks, behaviors, locations and hundreds of other dynamic data elements.
- Unusual traffic patterns—large numbers of access attempts at a time, or “low-and-slow” attacks that don’t align with the credentials being used in these access attempts.
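The last signal on that list lends itself to a concrete illustration. Here is a minimal, hypothetical Python sketch of how a detector might flag both patterns — bursts of access attempts and “low-and-slow” credential cycling — using simple sliding windows. The class name and thresholds are illustrative assumptions, not any vendor’s actual product; real systems layer in the device, behavioral and crowdsourced signals described above.

```python
from collections import defaultdict, deque
import time

class BotTrafficDetector:
    """Illustrative sketch: flags two bot traffic patterns per source IP --
    a burst of access attempts in a short window, and a 'low-and-slow'
    pattern that cycles through many distinct usernames over a long window.
    All thresholds are hypothetical defaults."""

    def __init__(self, burst_limit=20, burst_window=60,
                 slow_limit=5, slow_window=3600):
        self.burst_limit = burst_limit    # max attempts per burst_window seconds
        self.burst_window = burst_window
        self.slow_limit = slow_limit      # max distinct usernames per slow_window
        self.slow_window = slow_window
        self.attempts = defaultdict(deque)   # source -> attempt timestamps
        self.usernames = defaultdict(deque)  # source -> (timestamp, username)

    def record(self, source_ip, username, now=None):
        """Record one login attempt; return 'burst', 'low-and-slow', or None."""
        now = time.time() if now is None else now

        # Burst check: prune timestamps older than the short window.
        q = self.attempts[source_ip]
        q.append(now)
        while q and now - q[0] > self.burst_window:
            q.popleft()

        # Low-and-slow check: count distinct usernames over the long window.
        u = self.usernames[source_ip]
        u.append((now, username))
        while u and now - u[0][0] > self.slow_window:
            u.popleft()
        distinct = len({name for _, name in u})

        if len(q) > self.burst_limit:
            return "burst"
        if distinct > self.slow_limit:
            return "low-and-slow"
        return None
```

For example, 21 rapid attempts from one address trip the burst rule, while six different usernames tried from another address over 25 minutes trip the low-and-slow rule even though each individual request looks unremarkable.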
In some industries, companies that have deployed these solutions report they’ve been able to block more than 90 percent of all bot traffic, and cut overall bot-based access attempts by 50 percent—without negatively impacting the user experience.
With the stakes represented by the rise of the bots so high, many other organizations will no doubt join the fight.
Here’s hoping they’ll help keep a full-on Bot-pocalypse of any kind at bay.
To learn more, read our exclusive solution brief on securing your applications from bot attacks.