University of California, Santa Barbara (UCSB) security researchers found that changes made by rootkits can be detected by comparing banks of computers that share the same operating system configuration. Rootkits are malicious software designed to hide processes and programs from detection so attackers can keep accessing an infected system; the 2010 Stuxnet attack used a kernel-level rootkit to persist on infected machines even when they were restarted or reimaged.
Using a method they nicknamed Blacksheep, the researchers monitored kernel memory dumps from a large number of computer systems. The kernel is the main component of most operating systems, acting as a bridge between applications and the actual data processing done at the hardware level. The researchers looked for changes that could indicate a system had been compromised.
Darkreading.com quotes Christopher Kruegel, an associate professor in the Department of Computer Science at UCSB and a co-author of the research paper, as saying that Blacksheep requires no signatures or foreknowledge of an attacker’s code and could help companies detect attacks that would otherwise go unnoticed. “We are not solving the general malware problem, but against the important crop of kernel-level rootkits and kernel-level modifications and exploits, it is a very powerful and very robust and general tool.”
In a presentation for the Association for Computing Machinery (ACM) Conference on Computer and Communications Security, researchers demonstrated that in a cloud provider’s virtual machine network, Blacksheep worked extremely well. However, it had significant challenges to overcome in a real-world employee-workstation network.
Giovanni Vigna, a co-author of the research paper and professor in the Department of Computer Science at UCSB, observed that Blacksheep’s “usefulness comes from the fact that it is not based on signatures and (is) not based on the behavior of a piece of software. It’s just based on the fact that, hey, all these machines should have a very similar configuration in the kernel, so if somebody is an outlier (an observation that is numerically distant from the rest of the data) — it might not be a compromise, maybe it is a malfunction of some sort — but it’s something that should be looked at.”
In the darkreading.com piece, Vigna told Dark Reading contributing writer Robert Lemos how Blacksheep compared memory dumps from each monitored system: it first created lists of kernel memory modules, which were then sorted and compared, calculating how far each list of modules was from the others.
The system then compared each byte of a module’s code against the other systems to find differences that could indicate changes inserted by a rootkit. Blacksheep also conducted memory crawling to catch changes to kernel data and checked five different kernel entry points for signs of tampering.
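The approach described above (list each host’s kernel modules, compute pairwise distances between snapshots, and flag distant hosts as outliers) can be sketched roughly as follows. This is an illustrative toy in Python, not the actual Blacksheep implementation: the snapshot format, the distance metric, the threshold, and all names here are assumptions for the sake of the example.

```python
from statistics import median

def module_distance(a, b):
    """Distance between two hosts' kernel snapshots: modules present on
    only one host, plus differing bytes in modules they share.
    (Hypothetical metric; Blacksheep's real comparison is richer.)"""
    names_a, names_b = set(a), set(b)
    dist = len(names_a ^ names_b)          # modules loaded on only one host
    for name in names_a & names_b:
        code_a, code_b = a[name], b[name]
        dist += sum(x != y for x, y in zip(code_a, code_b))
        dist += abs(len(code_a) - len(code_b))
    return dist

def flag_outliers(snapshots, threshold=10):
    """Flag hosts whose median distance to their peers exceeds the
    threshold: candidate rootkit infections (or, as Vigna notes,
    possibly just malfunctions worth a look)."""
    hosts = list(snapshots)
    scores = {h: median(module_distance(snapshots[h], snapshots[o])
                        for o in hosts if o != h)
              for h in hosts}
    return [h for h, score in scores.items() if score > threshold]

# Illustrative fleet: three identically configured hosts and one whose
# kernel image has been patched and carries an extra hidden module.
clean = {"ntoskrnl": b"\x90" * 64, "hal": b"\xcc" * 32}
infected = {"ntoskrnl": b"\x90" * 32 + b"\xeb" * 32,   # 32 patched bytes
            "hal": b"\xcc" * 32,
            "rootkit": b"\x00" * 16}                   # module only it loads
fleet = {"ws1": clean, "ws2": dict(clean), "ws3": dict(clean), "ws4": infected}
print(flag_outliers(fleet))  # -> ['ws4']
```

Using the median rather than the mean keeps a single infected machine from dragging every clean host’s score upward, which matters because the outlier’s distance can dwarf the normal in-group variation.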
The system detected all incidents of kernel rootkit infection on 40 virtual machines running Windows 7, with no false positives. In a second test using physical systems running Windows XP, Blacksheep detected 75 percent of the rootkits with a 5.5 percent false-positive rate. The false positives arose because collecting memory dumps from real-world machines takes time, during which kernel memory can change or become inconsistent. Added Vigna, “Those inconsistencies show(ed) up as inconsistencies in our model.”
Other security experts questioned how well Blacksheep would work in the real world. Security pro John Prisco argued that Blacksheep could produce a large number of false positives because most companies don’t have truly homogeneous systems. “The problem with these systems in the past is that you get 10,000 changes, and the model becomes confused.”
On the other hand, security researcher Jerome Segura observes that gathering information among communities of computers is a valuable way to better protect all systems against threats. “The thing about a community,” Segura says, “is that you are getting information from resources you might not normally have access to, unless you had a big budget.”