Jon Oltsik
Contributing Writer

Measuring the Quality of Commercial Threat Intelligence

Opinion
Jul 22, 2015 | 4 mins
Cisco Systems | Cybercrime | Data and Information Security

One person’s quality is another person’s fluff, so objective measurements will be difficult. Threat intelligence quality may ultimately be gauged through crowdsourcing and threat intelligence sharing.

In my most recent blog, I described how a recently published ESG research report on threat intelligence revealed a number of issues around commercial threat intelligence quality (note: I am an ESG employee). As part of a recent survey of cybersecurity professionals working at enterprise organizations (i.e., those with more than 1,000 employees), ESG found that:

  • 72% of enterprise cybersecurity professionals believe that at least half of the information contained in commercial threat intelligence feeds/services is redundant regardless of the source.
  • 74% of enterprise cybersecurity professionals say that it is extremely difficult or somewhat difficult to determine the quality and efficacy of each individual threat intelligence feed.

I suggested that large organizations may overcome this problem over time as they deploy threat intelligence consolidation and analysis platforms (TICAPs) based upon the open source CRITs project, purchase commercial offerings from vendors like BrightPoint Security, ThreatGRID, and ThreatQuotient, or use threat intelligence integration features in SIEM platforms like LogRhythm, QRadar, and Splunk. Since TICAPs provide correlation tools and common dashboards, SOC personnel and malware analysts will be able to assess which threat intelligence feeds recognize each threat first, which provide the most detail about cyberattacks, which contain the fewest false positives, and so on.
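
To make that concrete, here is a minimal sketch of the kind of feed comparison a TICAP dashboard could drive. The feed names, record fields, and sample data below are my own invented illustrations, not any vendor's actual schema.

    # Minimal sketch: compare threat intelligence feeds on "who saw it first,"
    # average contextual detail, and false positive rate. All inputs are hypothetical.
    from collections import defaultdict
    from datetime import datetime

    # Each record: (feed, indicator, when the feed reported it,
    #               number of context fields supplied, analyst-confirmed true positive?)
    observations = [
        ("feed_a", "203.0.113.10", datetime(2015, 7, 1, 8, 0), 12, True),
        ("feed_b", "203.0.113.10", datetime(2015, 7, 1, 6, 30), 4, True),
        ("feed_a", "evil.example.com", datetime(2015, 7, 2, 9, 0), 9, False),
    ]

    first_seen = {}  # indicator -> (feed, timestamp) of the earliest report
    stats = defaultdict(lambda: {"reports": 0, "false_positives": 0, "detail": 0})

    for feed, indicator, ts, detail, confirmed in observations:
        stats[feed]["reports"] += 1
        stats[feed]["detail"] += detail
        if not confirmed:
            stats[feed]["false_positives"] += 1
        if indicator not in first_seen or ts < first_seen[indicator][1]:
            first_seen[indicator] = (feed, ts)

    for feed, s in stats.items():
        firsts = sum(1 for f, _ in first_seen.values() if f == feed)
        fp_rate = s["false_positives"] / s["reports"]
        print(f"{feed}: first on {firsts} indicator(s), "
              f"avg detail {s['detail'] / s['reports']:.1f}, FP rate {fp_rate:.0%}")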

After posting this blog, I received a fair number of email comments and suggestions about commercial threat intelligence quality, so I decided to offer a few additional editorial comments about commercial threat intelligence and threat intelligence sharing:

  1. When describing the characteristics of their threat intelligence, most vendors default to quantitative metrics, emphasizing how many enterprise customer-based devices act as sensors or how many honeypots they have deployed at strategic locations across the Internet.  They'll also tell you how much data they collect on a daily basis or how many malware samples they see.  Here's the problem: these metrics are meaningless to the market, as quantity doesn't necessarily equate to quality.  CERT professionals want to know details about what's in commercial threat intelligence, not how much threat intelligence information is contained within each commercial threat intelligence feed.  It is incumbent upon commercial threat intelligence vendors to shift their messaging from quantity to quality, since the market interprets their current quantity metrics through an old database lens: 'garbage in, garbage out' (GIGO).
  2. Several of my contacts suggested that the industry would benefit from some type of third-party testing of commercial threat intelligence, and I totally agree.  I'd like to see NSS Labs, VirusTotal, or some other testing organization take this on as soon as possible.  That said, there are only so many attributes of threat intelligence that can be objectively measured.  I could certainly assess which commercial threat intelligence feed was first to discover a rogue IP address, 0-day exploit, or malicious file, but that's not the whole story at all.  For example, vendor A may discover a new piece of malware hours before vendor B, but vendor B may provide far more contextual information about the malware and threat actors involved, including details on tactics, techniques, and procedures (TTPs).  So you could, say, create some type of formula like quality = time + details (see the sketch after this list), but what if those details are inaccurate or irrelevant?  Hmm, seems pretty subjective to me.
  3. Closely related to my last point, one person's critical threat intelligence is another person's noise.  This is the rationale behind ISACs, which share threat intelligence across a common industry.  So any third-party quality assessment must be multi-dimensional, considering generic threats, targeted threats, industry threats, geographic threats, etc.  How the heck do you measure quality in this scenario?
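
For what it's worth, here is a minimal sketch of that quality = time + details idea from point 2. The weights and the 0-to-1 detail and accuracy scores are invented inputs for illustration; the accuracy judgment is exactly the subjective part.

    # Hypothetical scoring of one vendor's report on a single threat.
    def report_quality(hours_behind_first, detail_score, accuracy_score,
                       w_time=0.4, w_detail=0.6):
        """hours_behind_first: how long after the first reporter this vendor published
        detail_score:       0-1, richness of context (TTPs, actors, infrastructure)
        accuracy_score:     0-1, analyst judgment of how much of that detail held up
        """
        timeliness = 1.0 / (1.0 + hours_behind_first)  # 1.0 if first, decaying afterward
        return w_time * timeliness + w_detail * detail_score * accuracy_score

    # Vendor A: first to report, but thin context.
    # Vendor B: six hours later, with rich and (judged) accurate TTP detail.
    print(round(report_quality(0, 0.3, 0.9), 2))   # 0.56
    print(round(report_quality(6, 0.9, 0.9), 2))   # 0.54

The two scores come out nearly identical, and flipping the weights or the accuracy judgment flips the ranking, which is the point: the formula only looks objective.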

Ultimately, I believe that commercial threat intelligence quality will be measured internally through the use of TICAPs as I described above.  Additionally, individual organizations will share their quality assessments and metrics with other trusted organizations through peer-to-peer, ad-hoc, and formal threat intelligence sharing relationships.  Imagine a network of organizations using some type of standard like STIX/TAXII, IODEF, or JSON to describe, compare, and share various commercial threat intelligence quality metrics. 
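
As a rough illustration, the payload an organization might share with trusted peers could look something like the following. The field names and metric values are invented for the sake of the example; as far as I know, none of the standards above define a feed-quality object today, so a real exchange would require an agreed-upon schema.

    # Hypothetical feed-quality assessment one organization might share with peers.
    import json

    assessment = {
        "assessing_org": "example-isac-member-42",
        "feed": "vendor_a_ip_reputation",
        "period": "2015-06",
        "sector": "financial-services",
        "metrics": {
            "indicators_received": 48210,
            "unique_vs_other_feeds_pct": 31,    # share not seen in any other subscribed feed
            "false_positive_rate_pct": 4.2,
            "median_hours_ahead_of_peers": 5,
        },
        "comments": "Strong on commodity malware C2; little targeted-attack context.",
    }

    print(json.dumps(assessment, indent=2))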

In my humble opinion, this is how commercial threat intelligence quality will be judged in the future: by public consensus, crowdsourcing, and shared observations rather than objective metrics.  Unfortunately, this will take a while, as threat intelligence sharing processes, standards, and technologies are incredibly immature.  Until then, commercial threat intelligence quality will remain amorphous and subjective at best.

Jon Oltsik
Contributing Writer

Jon Oltsik is a distinguished analyst, fellow, and the founder of ESG's cybersecurity service. With over 35 years of technology industry experience, Jon is widely recognized as an expert in all aspects of cybersecurity and is often called upon to help customers understand a CISO's perspective and strategies. Jon focuses on areas such as cyber-risk management, security operations, and all things related to CISOs.
