It’s only intelligence if you use it

Apr 11, 2018 · 5 mins

Threat intelligence can be valuable, if you actually have it and you’re prepared to make use of it.


It has been reported that C-level executives are suffering from a flood of threat intelligence. There is too much of it, it’s too complicated to make sense of, and its utility is questionable. As someone who has spent most of his career in intelligence, I can say with confidence that this isn’t a phenomenon limited to the commercial sector.

When I started my career we still used pencil and paper. You didn’t assault a decision-maker’s senses with 30 pages of PowerPoint, you wrote a report. One page. As the adoption of technology grew, we were able to assemble massive amounts of data in new formats for user consumption. In theory, this should have been the start of a golden age in the intelligence business: more data, more analytic power, more insights in front of decision-makers.

Instead it became a nightmare.

What intelligence isn’t

People have been mistaking data for intelligence since before cyber threat intelligence became a thing. An IP address, a person’s name, and a given piece of malware are all different types of data. Lots of data assembled into a coherent whole is information. Information subjected to methodology and expert input is intelligence. Intelligence tells you something you did not already know, or gives you some measure of confidence, allowing you to make decisions in an informed manner.

If you find yourself being overwhelmed by “intelligence,” you didn’t buy intelligence; you bought a data feed. Machines cannot produce intelligence. An algorithm may be able to process data in accordance with a given formula, but it cannot provide the insight that a subject matter expert — social, martial, political, technical, cultural, linguistic — can. Anything delivered to you as a decision-maker that isn’t finished by a human being isn’t intelligence.

Helping those who won’t help themselves

The flip side to the intelligence dilemma is that even if you are provided with well-sourced, well-analyzed, and timely intelligence, the utility of that intelligence is still zero if you’re unwilling to act on it. Instead of being something that helps you get ahead of the problem or competition, it becomes a lagging indicator of what happened because you did not act. History, in other words.

I saw this a lot when I was responsible for disseminating warnings to DOD elements about cyber threats. Such warnings had to fit within a larger framework that included physically dangerous near-peer adversaries like Russia or China. We fielded a lot of phone calls from frustrated commanders asking why Tiny Nation was now a red light on their threat dashboard. To someone trying to stop bad guys and keep good guys alive, you’re an idiot who is unnecessarily complicating their life.

The icing on that bureaucratic cake would usually come a month or so later, when that command would get hacked — and why didn’t we tell them that was going to happen? Well, we did; you just decided that either (a) we were wrong, or (b) it wasn’t as big a problem as we said it was. Fair enough, because in the end every decision-maker is their own intelligence analyst, but the problem wasn’t the intelligence or the related assessment.

Bias for action

Intelligence only works if both sides — producers and consumers — are working in harmony and fulfilling their respective roles. From the perspective of the intelligence team it is critical that you:

  • Avoid hyperbole and FUD. Stick to facts and what you can verify. Caveat questionable sources accordingly.
  • Avoid a rush to judgment. First reports of any sort of activity are usually wrong to some degree. Even when seconds count, it’s better to be right than first.
  • Provide a “so what” factor. Context is key. The boss can read the news; which stories should she pay attention to, and why?
  • Put things into perspective. What’s the worst thing that can happen? What — given historical precedent — is most likely to happen? A decision is more likely when there is a range of options linked to business impact.

Likewise, it is on the consumer of intelligence to make effective use of what is being provided:

  • Make your priorities clear. What matters the most to you? That will help the intelligence team understand what to focus on.
  • Make your consumption preferences known. Do you want slides? Will you read a full page of text or just a paragraph? Would you prefer someone in front of your desk articulating the issues?
  • Use what you’re given. Whether it’s a command to act, or another question to answer, the third option — inaction — suggests you have more fundamental issues that need to be addressed before you can make effective use of intelligence.
  • Provide feedback. What did they do that was useful? What are they doing that wastes your time? Absent direction, an intelligence team will produce what they think you want, not what you need.

Intelligence failures aren’t always about a failure to have the right information in the right hands at the right time; just as often, if not more so, they stem from decision-makers not believing what they are seeing, and as a consequence not acting. “Go back and look again” is a completely legitimate request, but there is no such thing as perfect, complete intelligence. At some point a decision has to be made, and that’s something intelligence cannot do for you.


Michael Tanji currently serves as Chief Operating Officer of Senrio, an IoT security start-up. He was co-founder and Chief Security Officer at Kyrus Tech, a computer security services company, one of the co-founders of the original Carbon Black, and the former CEO of Syndis.

Michael began his career as a member of the U.S. Army’s Military Intelligence Corps, working in a number of positions of increasing responsibility in signals intelligence, computer security and information security. He is a veteran of Operation Desert Storm and was stationed in various locations in the U.S. and overseas.

After leaving active duty Michael worked as a civilian for the U.S. Army’s Intelligence and Security Command, leading a team of analysts and programmers supporting intelligence missions in the Pacific theater. His service with INSCOM culminated as the Technical Director of the J6 in his command, responsible for evaluating, acquiring and deploying information technology in support of intelligence collection and analysis missions.

Michael left INSCOM to join the Defense Intelligence Agency, where he deployed in a counterintelligence/human intelligence role in support of Operation Allied Force. He later served as the lead of the Defense Indications and Warning System, Computer Network Operations, responsible for providing strategic warning of cyber threats to the DOD. He was one of the handful of intelligence officers selected by name to provide intelligence support to the Joint Task Force – Computer Network Defense, the predecessor to what would eventually become U.S. Cyber Command. His expertise led to his selection as his agency’s representative to numerous joint, inter-agency, and international efforts to deal with cyber security issues, including projects for the National Intelligence Council, National Security Council, and NATO. After September 11, 2001, Michael created the DOD’s first computer forensics and intelligence fusion team, which produced the first intelligence assessments based on computer-derived intelligence from the early days of the war on terror.

After leaving government service in 2005 Michael worked in various computer security and intelligence roles in private industry. He spent several years as an adjunct lecturer at the George Washington University and was a Claremont Institute Lincoln Fellow.

Michael is the editor of and a contributor to Threats in the Age of Obama, a compendium of articles on wide-ranging national and international security issues. He has been interviewed by radio and print media on his experiences and expertise on security and intelligence issues, and had articles, interviews, and op-eds published in Tablet Magazine, Weekly Standard, INFOSEC Institute, SC Magazine and others.

Michael was awarded a bachelor’s degree in computer science from Hawaii Pacific University, a master’s degree in computer fraud and forensics from George Washington University, and earned the CISSP credential in 1999.

The opinions expressed in this blog are those of Michael Tanji and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.