The Rise of Anti-Forensics

New, easy-to-use antiforensic tools make all data suspect, threatening to render computer investigations cost-prohibitive and legally irrelevant


As long ago as five years, Grugq was creating antiforensic tools. Data Mule is one tool in the package he calls the Defiler’s Toolkit. Likewise, Liu developed Timestomp, Slacker and other tools for the Metasploit Framework. In fact, a good portion of the antiforensic tools in circulation come from noncriminal sources, like Grugq and Liu and plain old commercial product vendors. It’s fair to ask them, as the overwhelmed cop in London did, why develop and distribute software that’s so effective for criminals?
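To see how little effort timestamp manipulation takes, consider a rough sketch of the idea behind a tool like Timestomp. The real tool rewrites NTFS timestamp attributes at the filesystem level; the hypothetical Python below only touches the access and modification times the operating system exposes, but it makes the same point: the dates an investigator reads off a file are trivially forgeable.

```python
import os
import time

def backdate(path, when="2001-01-01 00:00:00"):
    """Set a file's access and modification times to an arbitrary moment.

    Illustrative only: a real tool like Timestomp rewrites NTFS timestamp
    attributes directly, while os.utime can only change the access and
    modification times the OS exposes. The effect on a naive timeline
    reconstruction is the same.
    """
    epoch = time.mktime(time.strptime(when, "%Y-%m-%d %H:%M:%S"))
    os.utime(path, (epoch, epoch))  # (atime, mtime)

# Create a file today, then make it claim to be six years old.
with open("report.txt", "w") as f:
    f.write("nothing to see here\n")
backdate("report.txt")
print(time.ctime(os.path.getmtime("report.txt")))
```

Any timeline built from those fields is now fiction, which is exactly the problem forensic examiners face.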

Grugq’s answer: “If I didn’t, someone else would. I am at least pretty clean in that I don’t work for criminals, and I don’t break into computers. So when I create something, it only benefits me to get publicity. I release it, and that should encourage the forensics community to get better. I am thinking, Let’s fix it, because I know that other people will work this out who aren’t as nice as me. Only, it doesn’t work that way. The forensics community is unresponsive for whatever reason. As far as that forensic officer [in London] was concerned, my talk began and ended with the problem.”

Liu agrees but takes it further. He believes developing antiforensics is nothing less than whistle-blowing. “Is it responsible to make these tools available? That’s a valid question,” he says. “But forensic people don’t know how good or bad their tools are, and they’re going to court based on evidence gathered with those tools. You should test the validity of the tools you’re using before you go to court. That’s what we’ve done, and guess what? These tools can be fooled. We’ve proven that.”

For any case that relies on digital forensic evidence, Liu says, “It would be a cakewalk to come in and blow the case up. I can take any machine and make it look guilty, or not guilty. Whatever I want.”

Liu’s goal is no less than to upend a legal precedent called the presumption of reliability. In a paper that appeared in the Journal of Digital Forensic Practice, Liu and coauthor Eric Van Buskirk challenge the U.S. courts’ faith in digital forensic evidence. Liu and Van Buskirk cite a litany of cases that established, as one judge put it, computer records’ “prima facie aura of reliability.” One decision even said computer records were “uniquely reliable in that they were computer-generated rather than the result of human entries.” Liu and Van Buskirk take exception. The “unfortunate truth,” they conclude, is that the presumption of reliability is “unjustified” and the justice system is “not sufficiently skeptical of that which is offered up as proof.”

It’s nearly a declaration that, when it comes to digital information, there’s no such thing as truth. Legally anyway. As Henry likes to put it, “Antiforensic tools have rendered file systems as no longer being an accurate log of malicious system activity.”

Computer forensics in some ways is storytelling. After cordoning off the crime scene by imaging the hard drive, the investigator strings together circumstantial evidence left at the scene, and shapes it into a convincing story about who likely accessed and modified files and where and when they probably did it. Antiforensics, Liu argues, unravels that narrative. Evidence becomes so circumstantial, so difficult to have confidence in, that it’s useless. “The classic problem already with electronic crimes has been, How do you put the person you think committed a crime behind the guilty machine they used to commit the crime?” says Brian Carrier, another forensic researcher, who has worked for the Cerias infosecurity research program at Purdue University. Upending the presumption of reliability, he says, presents a more basic problem: How do you prove that machine is really guilty in the first place? “I’m surprised it hasn’t happened yet,” says Liu. “But it will.”

Under the current computing infrastructure, then, data is untrustworthy. The implications of this, of courts limiting or flat-out denying digital forensics as reliable evidence, can’t be overstated. Without the presumption of reliability, prosecution becomes a more severe challenge and thus a less appealing option. Criminals reasonably skilled with antiforensics would operate with a kind of de facto legal immunity.

Making It Not Worth It

Despite all that, casting doubt over evidence is just a secondary benefit of antiforensics for criminals. Most cases never reach the legal phase, because antiforensics makes the investigation itself a bad business decision. This is the primary function of antiforensics: Make investigations an exercise in throwing good money after bad. It becomes so costly and time-consuming to figure out what happened, with an increasingly limited chance that figuring it out will be legally useful, that companies abandon investigations and write off their losses.

“Business leaders start to say, ‘I can’t be paying $400 an hour for forensics that aren’t going to get me anything in return,’” says Liu. “The attackers know this. They contaminate the scene so badly you’d have to spend unbelievable money to unravel it. They make giving up the smartest business decision.”

“You get to a point of diminishing returns,” says Sartin. “It takes time to figure it out and apply countermeasures. And time is money. At this point, it’s not worth spending more money to understand these attacks conclusively.”

One rule hackers used to go by, says Grugq, was the 17-hour rule. “Police officers [in London’s forensics unit] had two days to examine a computer. So your attack didn’t have to be perfect. It just had to take more than two eight-hour working days for someone to figure out. That was like an unwritten rule. They only had those 16 hours to work on it. So if you made it take 17 hours to figure out, you win.” Since then, Grugq says, law enforcement has built up 18-month backlogs on systems to investigate, giving them even less time per machine.

“Time and again I’ve seen it,” says Liu. “They start down a rat hole with an investigation and find themselves saying, ‘This makes no sense. We’re not running a business to do an investigation.’ I’ve seen it at Fortune 100s. The company says, ‘We think we know what they got and where. Let’s close it up.’ Because they know that for every forensic technique they have, there’s an antiforensic answer. Unfortunately, the converse isn’t true.”


By now, it should be clear why Henry of Secure Computing has been giving a presentation called “Anti-Forensics: Considering a Career in Computer Forensics? Don’t Quit Your Day Job.” The state of forensics certainly sounds hopeless, and Henry himself says, “The forensics community, there’s not a hell of a lot they can do.”

But in fact there’s some hope. Carrier says, “Yes, it makes things a lot harder, but I don’t think it’s the end of the world by any means.” What can start to turn the tables on the bad guys, say these experts and others, is if investigators embrace a necessary shift in thinking. They must end the cat-and-mouse game of hack-defend-hack-defend. Defeating antiforensics with forensics is impossible. Investigations, instead, must downplay the role of technology and broaden their focus to physical investigation processes and techniques: intelligence, human interviews and interrogations, physical investigations of suspects’ premises, tapping phones, getting friends of suspects to roll over on them, planting keyloggers on suspects’ computers. There are any number of ways to infiltrate the criminal world and gather evidence. In fact, one of the reasons for the success of antiforensics has been the limited and unimaginative approach computer forensic professionals take to gathering evidence. They rely on the technology, on the hard disk image and the data dump. But when evidence is gathered in such predictable, automated ways, it’s easy for a criminal to defeat.

“I go back to my background as a homicide detective,” says the investigator in the aquarium case. “In a murder investigation, there is no second place. You have to win. So you come at it from every angle possible. You think of every way to get to where you want to go. Maybe we can’t find the source on the network with a scanning tool. So you hit the street. Find a boss. His boss. His boss. You find the guy selling data on the black market. The guy marketing it on [Internet Relay Chat]. You talk to them. They’re using stego? Maybe we drop some stego on them. The techniques used in physical investigations are becoming increasingly important.”

Indeed, if one looks back on some of the major computer crimes in which suspects were caught, one will notice that rarely was it the digital evidence that led to their capture. In the case of Jeffrey Goodin of California, the first case ever under the CAN-SPAM Act, it was a recorded phone call with a friend who had flipped on the suspect that led to the conviction. In the case of the Russian botnet operators who had extorted millions from gaming sites, it was an undercover operation in which a “white hat” hacker befriended the criminals. In the United Kingdom, says Grugq, the police are using social modeling to try to penetrate antiforensics used on mobile phones for drug dealing. “The police’s goal is to get a confession,” he says. “They don’t care if they have compelling evidence off the disk.” In the TJX case, the only arrests made to date are based on purchases of exorbitant gift cards at the company’s retail stores, caught on tape.

It will be the interviews with those people, and not system analysis, that will lead to more information and, potentially, more arrests in the case.

“Every successful forensics case I’ve worked on turned into a physical security investigation,” says Bill Pennington, a researcher at White Hat Security and veteran technical forensics investigator. “In one case, it was an interview with someone who turned on someone else. You layer the evidence. Build it up. He sees the writing on the wall, and he cracks. But if we had to rely on what the computer evidence told us, we would have been stuck.”

Moving Targets

Behind the portfolio of easy-to-use Windows-based antiforensic tools, criminal hackers are building up a next-generation arsenal of sophisticated technical tools that impress even veterans like Grugq. “There are now direct attacks against forensic tools,” he says. “You can rootkit the analysis tool and tell it what not to see, and then store all your evil stuff in that area you told the analysis tool to ignore. It is not trivial to do, but finding the flaw in the analysis tool to exploit is trivial.”
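Grugq’s point is easier to grasp with a toy model. The sketch below is not a rootkit; it simply simulates the end state he describes, where a subverted analysis tool has been told to skip a region of the disk image, and the attacker stores evidence precisely inside that blind spot. All names and sizes are invented for illustration.

```python
# Toy model of a "blinded" analysis tool: the tool skips configured
# disk regions, and the attacker hides data exactly where it won't look.
image = bytearray(b"\x00" * 1024)   # stand-in for a disk image
ignore = [(512, 768)]               # region the subverted tool won't read

secret = b"EVIL PAYLOAD"
image[512:512 + len(secret)] = secret  # stash evidence in the blind spot

def scan(img, ignored):
    """Return what the subverted tool 'sees': the image with every
    ignored range blanked out before analysis."""
    visible = bytearray(img)
    for start, end in ignored:
        visible[start:end] = b"\x00" * (end - start)
    return bytes(visible)

# The tool reports a clean disk, yet the evidence is really there.
assert b"EVIL" not in scan(image, ignore)
assert b"EVIL" in bytes(image)
```

A real attack would patch the forensic tool itself rather than call a friendly wrapper, but the asymmetry is the same: the examiner trusts output that the attacker controls.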

Another new technique involves scrambling packets so that investigators can’t find the data’s point of origin. The old-school way of avoiding detection was to build up a dozen or so “hop points” around the world—servers you bounced your traffic off of—which confounded investigations because of the international nature of the traffic and because it was simply difficult to determine where the traffic really came from. The state-of-the-art antiforensic technique is to scramble the packets of data themselves instead of the path. If you have a database of credit card information, you can divvy it up, send each set of packets along a different route and then reassemble the scatterlings at the destination point—sort of like a stage direction in a play for all the actors to go wherever as long as they end up on their mark.
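The scatter-and-reassemble idea can be sketched in a few lines. This is a simplified illustration, not any real tool: the routes are simulated by shuffling, and the function names, chunk size and sample data are invented. The point it demonstrates is that no single path ever carries the whole payload, so a tap on any one route yields only meaningless fragments.

```python
import random

def scatter(data: bytes, chunk_size: int = 8):
    """Split data into sequence-numbered chunks, as if each chunk were
    sent along a different network route (shuffling simulates the
    out-of-order arrival over many paths)."""
    chunks = [(seq, data[i:i + chunk_size])
              for seq, i in enumerate(range(0, len(data), chunk_size))]
    random.shuffle(chunks)
    return chunks

def reassemble(chunks):
    """At the destination, sort by sequence number and rejoin."""
    return b"".join(part for _, part in sorted(chunks))

card_db = b"4111111111111111,4222222222222222,4333333333333333"
in_flight = scatter(card_db)
assert reassemble(in_flight) == card_db
```

An investigator watching one hop sees a fraction of the chunks with no context; only the endpoint, which knows the sequence numbers, can put the scatterlings back on their mark.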

The aquarium attack, two years later, already bears tinges of computer crime antiquity. It was clever but today is hardly state of the art. Someday, the TJX case will be considered ordinary, a quaint precursor to an age of rampant electronic crime, run by well-organized syndicates and driven by easy-to-use, widely available antiforensic tools. Grugq’s hacking mentor once said it’s how you behave once you have root access that’s interesting. In a sense, that goes for the good guys too. They’ve got root now. How are they going to behave? What are they going to do with it? “We’ve got smarter good guys than bad guys right now,” says Savid Technologies’ Davis. “But I’m not sure how long that will be the case. If we don’t start dealing with this, we’re not even going to realize when we get hit. If we’re this quiet community, not wanting to talk about it, we’re going to get slammed.”

Send feedback to Senior Editor Scott Berinato.

© 2002-2007 CXO Media Inc. All rights reserved.

Reproduction in whole or in part without permission is prohibited.

Dated: June 01, 2007

Copyright © 2007 IDG Communications, Inc.
