Ever tried setting up a honeypot on your network?
Most find the experience frustrating. In many cases, what starts as a novel idea is quickly abandoned in favor of other projects. It’s just too hard to get right. Often, the value of the effort is uncertain. A lot of security feels that way.
Imagine my surprise when I met Haroon Meer (@haroonmeer), the founder of Thinkst, to discuss his take on honeypots in the form of Canary. Prior to Thinkst, Haroon spent over a decade as the technical director of a penetration testing company. That experience breaking into networks and applications around the world led to the creation of new techniques, too. He formed Thinkst in 2010 to bring the same sort of thinking to bear on building solutions.
We talked about honeypots.
That’s misleading. What we talked about was far greater. During the discussion, Haroon shared a video walking me through the ease of setting up the Canary. It was exciting to witness a security tool with thoughtful design. I really liked how Haroon and his team considered how to repurpose current technology and security solutions. All around fascinating.
That led to a conversation about how security and UX (user experience) aren’t truly at odds. Here are the five questions with Haroon Meer.
What got you thinking about honeypots?
In previous careers we’ve done penetration tests for companies all over the world, and a fairly consistent theme was how few companies (if any) detected our incursions. We’ve written hundreds of reports with the words: “What should be of great concern, is not just that we were able to compromise the target, but that even now, our presence on the network remains undetected”. This was not because we were super-ninjas, but because largely, the state of breach (and compromise) detection is so poor. If you take any of the high profile breaches of the past 5 years, what immediately sticks out is how long the attackers lurked on the victim networks before they were discovered. (In most cases, the victim companies only realised that they were breached when contacted by the press or other 3rd parties.) This seemed like a ridiculous problem to us and one that we started thinking more and more about solving.
While doing some strategic consulting with a client, we noticed that they were about to retire a number of old desktop machines. We suggested they build them out and deploy them as simple sensors to detect malicious activity on their sensitive segments. (The suggestion served multiple purposes: junior members of staff would get some experience setting up and monitoring these boxes & we believed there was a reasonable chance they would pick up “badness” on their network). Four months later, we returned and found the pile of machines in their SOC, with only a couple of machines half installed (and none successfully deployed). We tried to reinvigorate the idea, and management and staff bought in, but a subsequent visit found the machines in the same corner of the room. (With all of the fires they were dealing with, they just didn’t have the time to make it happen).
This was an organization that realistically faced nation-state level adversaries, with an above average chance that their networks were already compromised, and in terms of detection capabilities, they were just about at square-1. Sadly, this isn’t as rare as it should be.
On top of this we knew that dropping IDS sensors would still have the problem of mountains of false positives (or at least really long learning cycles). This had us searching for more suitable ideas.
You realized a disconnect between how honeypots were being used and how they could be used. What did you figure out?
Inundating their SOC with IDS alerts seemed a fair way to waste resources (and to get the company tied up with busy-work), but what if we could go another way? We started thinking about using those spare machines to mimic existing systems on their network. An attacker who was looking for the CFO’s desktop \\CFO_01 would be just as likely to plunder \\CFO_02, wouldn’t he?
This took us back to honeypots, but with a slight twist on the old idea.
Over the past few decades, the Honeynet Project did great work raising awareness of security in general, and honeypots in particular. We believe though, that their primary slogan, “To learn the tools, tactics and motives of the blackhat community”, led honeypots down a strange path. For the most part, they became tools to “study the blackhat in his native environment”. Of course this seems academically interesting, but it relegates the tool’s usefulness to academia.
The key insight then, is that we shouldn’t deploy boxes that look vulnerable on the network, we should deploy boxes that look valuable instead! When the criminals who broke into Target were on the internal network, they went looking for loot (not vulnerable IIS servers). When Snowden (now one of the most famous examples of an “insider attack”) bounced from share to share on the NSA network, he wasn’t looking for vulnerable servers to compromise, he was looking for file-shares to pillage!
In this sense, our Canaries are not deployed to “study hackers” or “learn the tools of the blackhat trade”. Like their name suggests, they are early warnings that your other controls have failed. They are a heads-up that an internal user is poking about where he/she shouldn’t.
What is the benefit to the security leader of rethinking the role of honeypots?
All of the recent breaches make it clear that despite millions (and sometimes hundreds of millions) of dollars being spent on cyber security, most organizations have no clue when hackers are burrowing into their networks or are moving laterally within them. Worse still, most have no clue when malicious insiders are pillaging servers from within. This is a ridiculous situation to be in and is a simple one for security leaders to test.
Would you know if Bob from accounting spent his free time looking at open Windows shares and copying files? Would you know if external attackers had broken in and were dropping implants or trojans on your machines? (When you last did a penetration test, how long did it take before your team caught the infiltrators?)
If you don’t have this visibility, then a honeypot is a great alternative. It offers the ancillary benefit of not drowning you in alerts. Intrusion detection systems typically flood the consoles of distraught security staff, who then spend all their time trying to separate the wheat from the chaff. With our Canaries, you almost never hear from them unless something real has happened: someone has found the host, inspected it, accessed it, and has possibly tried to copy files off it. This generates a single, unambiguous event that needs to be reacted to, and this paucity of alerts is actually a breath of fresh air.
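The alert flow Haroon describes can be sketched as a tiny decoy service: a host that runs no real service at all, so any connection to it is suspicious by definition, and each source triggers exactly one alert. This is a hypothetical illustration in Python, not Thinkst’s implementation; the port choice and alert format are invented for the example.

```python
import socket
import time

def format_alert(src_ip: str, port: int, ts: float) -> str:
    """Render a single, unambiguous alert line for one touch of the decoy."""
    stamp = time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(ts))
    return f"[CANARY] {stamp}Z connection to decoy port {port} from {src_ip}"

def serve_decoy(bind_addr: str = "0.0.0.0", port: int = 445) -> None:
    """Accept connections and emit one alert per source address.

    The decoy hosts nothing legitimate, so there is no wheat to
    separate from chaff -- every touch is worth reporting, once.
    """
    seen = set()  # alert once per source, not once per connection
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((bind_addr, port))
        srv.listen()
        while True:
            conn, (src_ip, _) = srv.accept()
            conn.close()  # never speak; the touch itself is the signal
            if src_ip not in seen:
                seen.add(src_ip)
                print(format_alert(src_ip, port, time.time()))
```

The interesting design property is the deduplication set: the operator sees one line per curious host, not a scrolling wall of connection logs.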
Honeypots are hard. You’ve built something that looks easy. How did you do it and what did you learn in the process?
We worked really really hard to make sure that deploying Canaries would be painless. For us, this wasn’t a nice-to-have, it was one of our strongest product requirements. (If you consider the genesis of the product, it is clear that the only possible way the solution would work, was if deploying and maintaining them placed very little burden on the already harried security team).
To this end, we went through several iterations of the product, polishing edges and rounding corners to make the process painless. What’s worth noting here is that we didn’t end up adding more and more dials and knobs that can be tuned. We instead slimmed down the interfaces, and used engineering to spare clients from having to think hard about unimportant choices. Some of these are small things, but some, for example, are as large as building an entire communication overlay network on top of DNS. (Aside from being technically cool, this means that Canaries can be deployed in multiple foreign branches without having to make holes in firewalls and DMZs to allow them to communicate).
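The DNS trick works because almost every network already lets internal hosts resolve names: a sensor can carry a small payload out as the subdomain labels of a lookup against a zone the console’s authoritative server controls, so no inbound firewall rules are needed. A minimal sketch of that kind of label encoding, assuming an invented zone name and scheme (this is not Thinkst’s actual protocol):

```python
import binascii

MAX_LABEL = 63  # DNS caps each dot-separated label at 63 bytes (RFC 1035)

def encode_query(payload: bytes, zone: str = "sensor.example.com") -> str:
    """Pack a small payload into the labels of a DNS query name.

    The sensor only ever asks its normal resolver to look up this name;
    the console, authoritative for `zone`, decodes the labels it receives.
    """
    hexed = binascii.hexlify(payload).decode("ascii")
    labels = [hexed[i:i + MAX_LABEL] for i in range(0, len(hexed), MAX_LABEL)]
    return ".".join(labels + [zone])

def decode_query(name: str, zone: str = "sensor.example.com") -> bytes:
    """Reverse the encoding on the authoritative-server side."""
    assert name.endswith("." + zone)
    hexed = name[: -(len(zone) + 1)].replace(".", "")
    return binascii.unhexlify(hexed)
```

A real transport would also have to respect the 253-byte limit on a full DNS name, chunk larger messages across multiple queries, and authenticate them; the sketch only shows why the technique traverses firewalls so easily.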
This concept of not giving users a thousand tunable parameters is a fairly new one, and one that we think will start catching on. Until now, security tools and products exposed dials and knobs, making everything tunable. While this does indeed offer the user options, you start to see that many users end up with a form of decision fatigue (and worse yet, many will just use the un-tuned version forever). Incidentally, this is also how we end up with things like GPG, which is useful at its core but seldom adopted by the mainstream. (For reference, the GPG manual page is over 16,000 words long).
It seems to me that we (the developers of security products) need to realise that our products now compete in the marketplace with products that have invested heavily in good UX and design, and that increasingly, these become table stakes for building a product. (Actually delighting the customer becomes a requirement for organic sales & growth).
We are pretty pleased to see this bearing fruit. Our Twitter timeline is dotted with actual customers from all over the world expressing genuine love for our Canaries. This is pretty unusual for software products in general and almost completely unheard of for a security product.
What shift in thinking is necessary for security leaders looking to improve their results?
The community often complains about the quality and types of tools pushed into the market, but then often makes inadvertent decisions that reward “bad” behaviour. This leads us to a situation where the market is dominated by tools that don’t really help us as much as they could.
A typical example of this is “generated alerts”. Demos and client pitches do best with a large number of alerts and with dashboards that fill up quickly. It means the product is doing something. Operationally though, even just 100 daily alerts can quickly overwhelm a SOC, leaving them chasing shadows. It’s for this reason that we have worked incredibly hard to make sure that our Canaries are not overly “chatty”. They send out one alert, when it matters.
You also see this pop up with “trendy” products or when people talk about the “hot technology du jour”. A little while back, every product released needed to include a link to the “dark web” or to “machine-learning” (and “big data”). These are products that appeal to the inflight magazine reader (and may add little actual value to the teams doing the work on the ground).
Where this really, really bothers me is when you see these incredibly complex solutions that almost always sit dusty on shelves (or sit in a corner, only half implemented). This happens for a number of reasons, but chief amongst them is that people measure purchases against a list of features (does it do X? does it also do Y? how about XY?). This sends developers and product companies racing to add features to win the checklist war. (Simply carpet bombing a client with features may be effective for making a sale, but really isn’t that helpful on the ground. It always reminds me of the old saying: “I didn’t have the time to write you a short letter, so I wrote you a long one instead”).
This strategy has the ancillary benefit, for the product company, of making money selling professional services, training and certification to use their tools. This is a hamster wheel of pain. You buy a product to help you with a problem, and then you need to certify your staff or change everything you do to manage the solution to the problem? That sounds more to me like inheriting a few more problems.
It really doesn’t have to be this way.
Security leaders need to look beyond the checklists, and demand more from their suppliers, with the knowledge that, if it’s properly thought out, more really could be less.