
The best of Black Hat: The consequential, the controversial, the canceled

Jul 18, 2017 | 16 mins
Data and Information Security | Hacking | Security

Over the past two decades, the annual Black Hat conference has had its share of controversy. CSO looks back at the most significant talks and demonstrations.

Black Hat 2015
Credit: REUTERS/Steve Marcus

For two decades, Black Hat has built a reputation for demonstrations of some of the most cutting-edge research in information security, as well as development and industry trends. The event has also had its share of controversy – sometimes enough to cause last-minute cancellations.

Launched in 1997 as a single conference in Las Vegas, Black Hat has gone international, with annual events in the U.S., Europe and Asia. This year’s U.S. event – the 20th, at Mandalay Bay in Las Vegas – begins July 22 with four days of technical training, followed by the two-day main conference.

CSO looks at some of the past Black Hat highlights – and a few that didn’t happen.

“Jackpotting Automated Teller Machines” – Barnaby Jack, 2010

The late white-hat hacker superstar took Black Hat by storm with his demonstration that he could make an ATM spew bills the way Vegas slot machines used to spew quarters, all with a few keystrokes from his laptop. Those in the audience described it as something out of the 1995 cyberpunk movie “Hackers.” It later became a plot line for a 2015 episode of the CBS TV crime drama “CSI: Cyber.”

[Related: 3 tips to get the most out of Black Hat/Defcon]

The talk was controversial in part because it was supposed to have happened a year earlier, but Jack’s then-employers pulled his planned 2009 presentation after ATM makers made legal threats. Well in advance of his 2010 presentation, he had alerted the manufacturers about the vulnerabilities he had found in time for them to have remediation in place.

“iOS Security” – Dallas De Atley, 2012

The fact that this talk happened at all was a very big deal – the first time notoriously secretive Apple had cleared an employee to discuss its internal security. It was viewed as an acknowledgement that, after Flashback and Mac Defender malware had infected the Mac OS X operating system, the company could no longer claim that its products “don’t get PC viruses.”

The big deal about the talk itself was that it ended up not being a big deal. De Atley, manager of the company’s Platform Security team, left the packed room “bored and deflated,” according to the New York Times, when he spent the hour essentially reading a white paper synced to a PowerPoint, then left without taking any questions.

According to one Twitter review, “It was very, very meh.”

“Battery Firmware Hacking” – Charlie Miller, 2011

Miller, at the time a principal research consultant for Accuvant Labs, poked another hole in the prevailing wisdom that Apple devices were more secure than others. He demonstrated that he had figured out both of the passwords protecting the embedded controller in the “smart” batteries used in MacBook, MacBook Pro and MacBook Air laptops – and that, once inside, he could brick them, leaving them unable to take a charge or discharge any power.

He was unable to achieve his main goal, which was to make the battery catch fire or blow up. That, he acknowledged as a loyal Apple user, was a good thing.

Miller, who has since become much more famous for hacking into the control systems of a Jeep with colleague Chris Valasek, told CNN after his presentation that Apple devices had actually become much more secure in the four years since he began hacking them.

“Cellphone Intercepts with Femtocells” – iSec Partners, 2013

The sign on the door of this session had a “proceed at your own risk” air about it: “Cellular interception demonstration in progress,” it said, adding in the fine print that among the interruptions users of CDMA devices could experience was, “loss of 911 service.”

If that was a bit unnerving, that was the point. Presenters Doug DePerry and Tom Ritter showed how, exploiting a vulnerability they found in the way mobile devices connect to a femtocell (miniature cell tower), they could eavesdrop and record voice calls, intercept incoming SMS and MMS messages, launch a man-in-the-middle attack to view Web sites being accessed, strip SSL from secure pages and even clone mobile devices without physical access to them – all from up to 40 feet away.

Femtocells – network devices offered by Verizon, Sprint and AT&T to boost a cellular signal – are designed to improve reception, but are, “a bad idea,” Ritter said, since phones will automatically connect to the tower with the strongest signal without user interaction or knowledge.

The demonstration was against a Verizon femtocell, and the two said Verizon had patched the vulnerability – but neither would comment on how effective the patch was.

“Hacking Medical Devices for Fun and Insulin, Breaking the Human SCADA System” – Jay Radcliffe, 2011

The threat from hacked medical devices is something Radcliffe, a security researcher, takes personally. As a type 1 diabetic, he has, as they say, skin in the game – he is connected to an insulin pump and glucose monitor all the time which, in his words, makes him something of a “human SCADA system.”

Jay Radcliffe. Credit: REUTERS/Brian Losness

He told the audience that when he was starting his research into the possibility of hacking into the wireless communication component of the device, he told his curious 5-year-old son, “I want to show that bad people can’t do things to dad.” Of course, what he found was that they could. One of the communication methods, using a USB thumb drive, had no authentication or encryption between the configuration tool and the device. The link did require the device’s serial number, but that could be obtained through social engineering or brute force.

That would allow an attacker enough control, from as far as a half-mile away, to change settings that could be lethal. Radcliffe said the device had no way to notify the user that it had been modified. He told AP that he found the technology “really cool,” but he also confessed to, “sheer terror, to know that there’s no security around the devices which are a very active part of keeping me alive.”
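The arithmetic behind that “sheer terror” is easy to sketch. Assuming, purely for illustration, a six-digit numeric serial (the talk summary does not give the real format), this hypothetical Python sketch shows how small a search space a serial-number-only access control leaves an attacker:

```python
from itertools import islice, product
import string

# Illustrative assumption, not the real pump's format: a 6-digit
# numeric serial as the device's only "secret."
DIGITS = string.digits
SERIAL_LENGTH = 6

def keyspace_size() -> int:
    """Total candidate serials an attacker must try in the worst case."""
    return len(DIGITS) ** SERIAL_LENGTH

def enumerate_serials():
    """Yield every candidate serial in order - trivial to exhaust."""
    for combo in product(DIGITS, repeat=SERIAL_LENGTH):
        yield "".join(combo)

print(keyspace_size())                       # 1000000
print(list(islice(enumerate_serials(), 3)))  # ['000000', '000001', '000002']
```

A million candidates is nothing by cryptographic standards, which is why a serial number is identification, not authentication.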

“Black Ops 2008 – It’s the End of the Cache as We Know It” – Dan Kaminsky, 2008

It was hard to imagine better advance publicity. Kaminsky’s presentation, on a Domain Name System (DNS) flaw he discovered, had prompted an emergency summit months earlier with DNS vendor representatives, hosted by Microsoft, to create a fix. He disclosed the flaw publicly in early July – about a month before the conference – with vendors simultaneously releasing patches in what Black Hat termed “a combined effort of historic proportions.”

[Related: Black Hat basics: Ruminations on 19 years of Black Hat Briefings]

It made “cache poisoning” and “DNS flaw” into IT buzz phrases. All deserved, since the “gaping hole” he discovered would allow attackers to redirect users of nearly every DNS server in the world to malicious sites, hijack their email, steal their passwords, subvert legitimate updates, target FTP and SSL and more.

So, by the time Kaminsky took the stage it was standing room only. After hearing him say things were even worse than he had initially thought – that, “there are a ton of paths that lead to doom” – they gave him a standing ovation. To this day, his discovery is known as Kaminsky’s Flaw.
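To see why such a cache-poisoning race is winnable, recall that a classic resolver accepts a forged DNS reply largely on the strength of a 16-bit transaction ID. This back-of-the-envelope Python sketch (the numbers are illustrative assumptions, not figures from Kaminsky’s talk) estimates an off-path attacker’s odds when the flaw lets them trigger unlimited races:

```python
# Illustrative model: the attacker wins a race if one of their forged
# replies matches the resolver's randomly chosen 16-bit transaction ID.
ID_SPACE = 2 ** 16        # possible transaction IDs
FORGED_PER_RACE = 100     # forged replies sent per triggered query (assumed)

def poison_probability(races: int) -> float:
    """Chance of at least one ID match over `races` independent attempts."""
    p_miss_one_race = 1 - FORGED_PER_RACE / ID_SPACE
    return 1 - p_miss_one_race ** races

# A few hundred races already make success more likely than not.
print(f"{poison_probability(500):.3f}")
```

Post-2008 mitigations such as source-port randomization work by multiplying the space the attacker must guess, which collapses these odds.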

“Keynote” – General Keith Alexander, 2013

The fact that security guards confiscated eggs from some of those heading into the auditorium was one hint that Alexander’s appearance was controversial. The then-NSA director’s speech came less than three months after former NSA contractor Edward Snowden leaked thousands of documents showing that the agency was conducting mass surveillance on US citizens, with the cooperation of telecoms.

General Keith Alexander. Credit: REUTERS/Steve Marcus

Alexander was able to defuse at least some of the hostility. He insisted that NSA surveillance was not nearly as broad as reported. He said, “Not all the facts are on the table.” He said surveillance was necessary, and that much of it had to be classified because, “Terrorists use our communications.” He said surveillance had disrupted or prevented dozens of terrorist attacks, and that there were technical and policy restrictions in place that protected Americans’ privacy – he said he was unable to intercept his daughters’ emails – and that included “100 percent auditability.”

He did not address Snowden’s revelation of XKeyscore, a program that reportedly allowed analysts, without authorization, to use databases to monitor emails, other communications and the browsing history of anyone in the world. He called the damage from the Snowden revelations, “significant and irreversible.”

He concluded with a bit of a charm offensive. “You are the greatest gathering of technical talent anywhere in the world … I want you to help us make (the NSA) better,” he said. 

That produced what one report called “warm applause” at the end.

“Remote Exploitation of an Unaltered Passenger Vehicle” – Charlie Miller and Chris Valasek, 2015

There was some excellent advance notoriety for this one as well, along with proof that there is no such thing as bad publicity. Weeks earlier, Miller and Valasek had demonstrated what they would be presenting. With Wired reporter Andy Greenberg at the wheel of a Jeep Cherokee, driving 70 mph on a public highway on the outskirts of St. Louis and the two hackers sitting, miles away, at their computers, they were able to control the radio and air conditioning, kill the hazard lights, cut the transmission and put a picture of themselves on the car’s digital display.

Christopher Valasek (left) and Charlie Miller. Credit: REUTERS/Steve Marcus

This was a test of the research the two had been doing over the past year – the use of a zero-day exploit to take control of a car’s functions, including steering, brakes and transmission, through its entertainment system.

Readers of Greenberg’s account weren’t the only ones criticizing the danger of the stunt. He was as well, observing from inside the car, as it slowed to a crawl and traffic piled up behind him, “this is (expletive) dangerous.”

As all good white-hat hackers do, the two had shared their research with Chrysler well in advance, which allowed the company to patch the vulnerability ahead of the conference. All of which led to the predictable standing-room-only audience and weeks later, a change of employment – Miller, who had worked at Twitter, and Valasek, who was at IOActive, were both hired by Uber, to work for the company’s Advanced Technologies Center. Miller quit Uber earlier this year.

“Keynote: Cybersecurity as Realpolitik” – Dan Geer, 2014

Geer, one of the most incisive minds in the business with an ability to explain just how difficult it is to answer difficult questions, delivered what he promised – a series of recommendations on difficult issues presented “with all humility, (which) does not mean timidity.”

The CISO of In-Q-Tel, a not-for-profit investment firm that supports the CIA, confronted 10 of the cybersecurity world’s most vexing issues including mandatory reporting of breaches or other failures (above a certain threshold of severity), source code liability, striking (hacking) back, the right to be forgotten, internet voting, the open sourcing of abandoned code bases (think crowdsourcing security for Windows XP), and convergence.

Geer had “yes” or “no” answers for very few of them. As he had noted at the beginning, there are four harsh realities of government:

  • Most important ideas are unappealing
  • Most appealing ideas are unimportant
  • Not every problem has a good solution
  • Every solution has side effects

He concluded with a definition of realpolitik: “What is successful is right and what is unsuccessful is wrong, that there is no moral dimension in how the world is, and that attempting to govern based on principles cannot succeed. Realpolitik is at once atheistic and anti-utopian.” “I find that distasteful,” he said.

Good thing.

“Keynote: A story about Digital Security in 2017” – Richard Clarke, 2007

A decade ago, the former chief counter-terrorism adviser on the U.S. National Security Council for portions of both the Clinton and George W. Bush administrations took a shot at predicting the digital world of today. It hasn’t all turned out that way yet – one of his predictions was a project to reverse engineer the human brain. “You might be able to add memory to the brain as easily as to a laptop today. If you can add memory, you can download it, and if you can download it, maybe that memory will outlive the person.”

Richard A. Clarke. Credit: REUTERS/Christian Charisius

Not yet. But the core of his message was that the astounding progress and benefits of the convergence of artificial intelligence, nanotechnology, biotechnology, robotics and pharma are “based on assumption – the same assumption we base our economy on today – that cyberspace is secure. And it’s not – I don’t need to tell you,” he said.

Securing it, he said, would require much better authentication and encryption – areas in which there has been progress, but not enough to defeat cybercriminals and nation-state hacking and espionage.

Clarke, now chairman of Good Harbor Consulting, didn’t hesitate to get political, with most of the venom aimed at Bush. He said the former president and his staff were against the kind of genetic engineering that could “enhance the human brain – the human being.” “When you see some of their cabinet members, you know why,” he said, arguing that a debate over “what it means to be human” will be happening in 20 years, “and we need to start thinking about it now.”

“The Battle for Free Speech on the Internet” – Matthew Prince, 2015

It has been said many times that the First Amendment exists to protect what many consider reprehensible or offensive speech, since popular, inoffensive speech needs no protection. Matthew Prince, wunderkind entrepreneur and CEO of CloudFlare, which accelerates and protects Web content, told the Black Hat audience that philosophy needs to guide companies like his that facilitate, and to some extent control, web content.

Matthew Prince. Credit: REUTERS/Gerry Shih

He said early on they grappled with their “role as a provider. Were we going to be the content police – take things off the Internet because we judge them to be bad – or be the anarchists who say all content is good and leave things up no matter what? Or is the right line somewhere between those places?”

Walking that line, he said, means that CloudFlare amounts to “kind of a Switzerland of the Internet” – a neutral party that is frequently disliked, but respected. That role, he said, means providing services to “some strange bedfellows” – polar opposites such as Hamas and the Israel Defense Forces, pro-Ukraine and pro-Russia forces, Occupy Wall Street and Goldman Sachs.

“If you sign up to be the neutral party, you’re going to piss everyone off,” he said. “But I don’t know a better way to do it. It’s easy to say you should kick all the bad sites off your network, but it’s hard to say exactly what is bad.”

Prince said his company does follow the laws of different countries. “We follow China rules in China,” he said, “but that doesn’t mean China dictates what’s on the internet as a whole. We follow what US courts say.”

The biggest danger to free speech on the Internet, he said, is that a very small number of companies like his – he mentioned Amazon, Akamai, Facebook and Google – amount to “choke points” for what is allowed. That, he said, is good for his business. “But as a citizen of the Internet, it scares the living sh– out of me,” he said. “It’s too much power in a small set of hands.”

“Bad USB – on Accessories that Turn Evil” – Karsten Nohl, Jakob Lell, 2014

It has been widely known, at least since 2010 after the Stuxnet attack on Iranian nuclear facilities at Natanz, that USB drives could be used to launch cyber attacks.

White-hat hackers Karsten Nohl and Jakob Lell, of Security Research Labs, in a hack they called BadUSB, took it a lot further.

In the promo for their talk, they noted that while USB sticks “undergo the occasional virus scan, we consider USB to be otherwise perfectly safe – until now.” The two demonstrated that by injecting malware that operates from controller chips inside USB devices, they could reprogram them to spoof other device types to “take control of a computer, exfiltrate data or spy on the user.”

Best (or worst) of all, the self-replicating virus was “not detectable with current defenses.” In other words, “trust but verify” doesn’t apply, since antivirus scans wouldn’t find it. “The sky’s the limit as to what a remote attacker can do,” Nohl said. What to do? “Lock your firmware down,” Nohl said. “Make sure it’s not going to be reprogrammed. Burn it once and never go back.”
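Nohl’s “lock your firmware down” advice amounts to refusing unsigned reflashes. Here is a minimal, hypothetical Python sketch of that idea, using an HMAC with a vendor key as a stand-in for the asymmetric signatures real controllers would use (all names and keys here are illustrative, not from the talk):

```python
import hashlib
import hmac

# Illustrative placeholder only; a real device would keep a verification
# key in mask ROM and use public-key signatures, not a shared secret.
VENDOR_KEY = b"example-vendor-key"

def sign_firmware(image: bytes) -> bytes:
    """Vendor-side: produce a tag over the firmware image."""
    return hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()

def accept_update(image: bytes, signature: bytes) -> bool:
    """Device-side: flash only images with a valid tag."""
    return hmac.compare_digest(sign_firmware(image), signature)

legit = b"\x00firmware-v2"
sig = sign_firmware(legit)
print(accept_update(legit, sig))               # True
print(accept_update(b"badusb-payload", sig))   # False
```

The point of the sketch is the policy, not the primitive: if the controller verifies every reflash (or, per Nohl, accepts none at all), a BadUSB-style reprogramming attack has nothing to write into.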

Canceled talks

“The Chinese Cyber Army: An Archaeological Study from 2001 to 2010” – Armorize, 2010

A researcher from Armorize, a Taiwanese security vendor, was scheduled to speak on China’s government-backed hacking initiatives and its ability to launch cyber attacks.

According to an executive at Armorize, the company decided to cancel the presentation after several Taiwanese and Chinese organizations that had contributed to the report wanted it pulled. The executive did not give the reasons for their complaints.

“RFID for beginners” – Chris Paget, 2007

In the promotion for his presentation, Paget of IOActive said, “I’ll explain everything you need to know in order to build a working cloner, understand how it works, and see exactly why RFID is so insecure and untrustworthy.” The talk was pulled after secure card maker HID Corp. objected in a letter that claimed possible patent infringement.

An HID spokeswoman acknowledged that the company’s RFID proximity cards were vulnerable to hacks, but said Paget was exaggerating the risk and that showing how to hack the cards would endanger HID’s customers. IOActive argued that the concepts behind its research were not new and simply illustrated potential security shortcomings of contactless building access controls that HID had noted in the past.

“Given the threat of pending litigation, we had no choice but to cancel the talk,” said Joshua Pennell, IOActive president. Black Hat’s Jeff Moss said the move by HID, “is a threat to the conference business. It will reach a point where everything will be dumbed down and everything we can discuss will come from a sales sheet from a product manufacturer. I don’t like it at all. It doesn’t bode well for security research.”

“Weaknesses in Apple’s FileVault” – Charles Edge, 2008

This one ignited controversy not only over its proposed topic, but over whether it was ever scheduled in the first place. Edge, a security researcher, said he had been scheduled to discuss a weakness he had found in Apple’s FileVault encryption system, but then canceled the talk a week before the conference. He told Brian Krebs, then at the Washington Post, that he had “signed confidentiality agreements with Apple,” which prevented him from speaking on the topic or discussing the matter further.

Krebs wrote at the time, “these kinds of reversals have a funny way of stoking the curiosity of the hacker community, already an inquisitive bunch by nature.” Black Hat organizers said, however, that the talk had never been officially scheduled.