AI will not solve your security analytics issues

Nov 02, 2017 | 5 mins
Analytics | Artificial Intelligence | Data and Information Security

Implementing a new AI-based analytical solution for your SOC is tempting, but it addresses the symptoms rather than the root cause of the issues your organization faces.


Managing a SOC is not pretty: constant stress from an avalanche of tickets and vast amounts of data to analyze, often with underpowered and sometimes outdated tools, combined with high turnover and low staff morale. It is understandable that in such an environment everyone is looking for a miracle.

Any new technology capable of automating analysis and detecting anomalies gets the attention of security operations. With the amount of hype surrounding AI, the temptation to jump into early adoption is great.

Before deciding to adopt any new analytics tool, you should perform a cost-benefit analysis. Cost does not necessarily mean the money spent on acquisition, implementation, and management of the new solution, but rather the additional resource utilization and latency the solution is going to introduce.

The top issue many SOCs face is the volume of analytical activity, driven either by poorly defined investigation triggers or by low-quality input data. Both result in numerous false positives that tax analysts' time and erode team morale through useless routine. There is truth in the argument that cognitive biases affecting analysts may result in real security incidents being missed or dismissed. However, an automated pattern recognition solution does not address either of these root causes.

AI, as a pattern recognition technology, is an evolutionary step from last-generation machine learning technologies, such as behavior analytics and the crude statistical analysis applied at the beginning of the century. The big advantage of AI over those earlier technologies is faster learning with less training data. This, however, does not mean that AI can make an intelligent decision better than a human analyst.

There are three things we should keep in mind when evaluating the current generation of AI solutions:

1. These solutions employ statistical, rather than behavioral, AI

It relies on optimization via backpropagation, a method where “weights,” or linear adjustments, are applied to each of the independent variables used in evaluating the input data. The idea is to build an equation that determines how closely the evaluated data matches a pre-defined pattern and to provide an answer with a certain degree of assurance. There are several issues with this approach, the biggest being a guaranteed number of false positives, no matter how good your pattern recognition algorithm is. This is one of the reasons some AI scientists call for moving away from backpropagation as the basis for statistical AI.
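To make the idea concrete, here is a minimal sketch of such a weighted scoring model. It is illustrative only, not any product's actual algorithm; the event features, weights, and bias are hypothetical. The point is that the output is a probability-like score, never a hard yes/no:

```python
import math

def sigmoid(z):
    """Squash a weighted sum into a 0..1 'confidence' score."""
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical learned weights for three event features
weights = {"failed_logins": 0.8, "off_hours": 1.2, "new_geo": 1.5}
bias = -3.0

def score(event):
    # Weighted sum of the event's features, plus bias, then sigmoid:
    # the result expresses how closely the event fits the learned pattern.
    z = bias + sum(weights[k] * event.get(k, 0) for k in weights)
    return sigmoid(z)

benign = {"failed_logins": 1, "off_hours": 0, "new_geo": 0}
suspect = {"failed_logins": 3, "off_hours": 1, "new_geo": 1}
print(round(score(benign), 3))   # low confidence
print(round(score(suspect), 3))  # high confidence
```

In training, backpropagation adjusts `weights` and `bias` to minimize error on labeled examples, but the scored output remains a degree of assurance, which is why some rate of false positives is unavoidable.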

2. Data quality is still extremely important for AI learning

The last generation of behavior analytics products relied heavily on data quality in the initial learning cycle, and AI is not that different. An AI-based product can learn to treat anomalies as “normal” through two mechanisms: ingestion of bad data during the learning cycle, and manual adjustment by a user marking something as a false positive. If you have data quality issues in the information you ingest into your security analytics, these may get amplified by AI, resulting in higher analytical overhead for the staff.

3. AI will not make a binary yes/no decision…

…but rather tell you, with a certain degree of (un)certainty, that it has identified something fitting the pre-set models. This means your team needs to set an uncertainty threshold that minimizes the number of false positives (staff utilization overhead) while keeping false negatives (a security incident in the making) as close to zero as possible.
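The threshold trade-off can be sketched in a few lines. The scored alerts below are made-up data for illustration; the mechanics of counting false positives against missed incidents at each candidate threshold are what matter:

```python
# Hypothetical scored alerts: (model confidence, was it a real incident?)
alerts = [(0.95, True), (0.80, True), (0.75, False), (0.60, False),
          (0.55, True), (0.40, False), (0.30, False), (0.10, False)]

def tradeoff(threshold):
    # False positives: benign events raised to analysts (overhead).
    # False negatives: real incidents suppressed by the threshold (risk).
    false_positives = sum(1 for s, real in alerts if s >= threshold and not real)
    false_negatives = sum(1 for s, real in alerts if s < threshold and real)
    return false_positives, false_negatives

for t in (0.3, 0.5, 0.7):
    fp, fn = tradeoff(t)
    print(f"threshold={t}: {fp} false positives, {fn} missed incidents")
```

Lowering the threshold buries the team in benign alerts; raising it starts suppressing real incidents. Picking the operating point is a risk decision your team makes, not something the model decides for you.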

Once an AI-based product has identified a suspicious pattern above your threshold, the SOC team will be engaged for further analysis, which may escalate into an investigation and potentially into incident management. These are the stages where you will still rely on human analysis, which is subject to numerous cognitive biases. Eliminating false pattern recognition in routine analytics has only a limited effect, as the risk remains during escalated investigation activities.

Considering all of the above, AI will introduce only a marginal improvement in security analytics in terms of performance, quality, and resource utilization. The only way to increase the usefulness of AI is to increase the effectiveness of security analytics by addressing its weaknesses.

Data quality is the single biggest input issue hindering security analytics. Often, security piggybacks on enterprise monitoring for data collection. This results in pulling unnecessary data, missing data, and a lack of detail in the data needed to make a correct automated decision. In addition, data streams lack ingestion sync: some are available in near real-time, while other data arrives in batches at pre-defined intervals. This discrepancy in collection approach means that some data correlation cannot be executed or is significantly delayed.
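The ingestion-sync problem above is easy to quantify. In this hypothetical setup, network alerts stream in near real-time while host logs arrive in hourly batches, so any correlation keyed on those logs cannot fire until the covering batch lands:

```python
from datetime import datetime, timedelta

# Assumption for illustration: host logs are delivered in hourly batches,
# each batch covering the preceding hour and landing at the top of the hour.
BATCH_INTERVAL = timedelta(hours=1)

def earliest_correlation_time(event_time):
    # The batch covering event_time arrives at the end of that hour,
    # so correlation against host logs cannot start any earlier.
    batch_start = event_time.replace(minute=0, second=0, microsecond=0)
    return batch_start + BATCH_INTERVAL

alert_time = datetime(2017, 11, 2, 9, 5)       # streamed alert seen at 09:05
ready = earliest_correlation_time(alert_time)  # host logs land at 10:00
print(f"correlation delayed by {ready - alert_time}")
```

A 55-minute correlation gap like this is invisible if you only look at each stream in isolation, which is why ingestion timing belongs in any data quality review.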

There is no single solution to the data quality problem other than examining each data stream individually and enforcing standardization and consistency. This also requires collaboration with non-security operational teams, which is often weaker than desired in many enterprises.

Increasing data quality will yield a significant increase in both the capabilities of AI-based analytical platforms and the overall efficiency of security analytics. For further improvement, focus on the output that triggers investigation and incident response. Interpretation and validation of analytical results, whether automated or manual, by the investigation or incident response team needs de-risking from biased decisions. The best way to do this is through collaborative analysis, where multiple analysts view the data and results from multiple perspectives (e.g., network vs. system vs. application behavior). This means revisiting response processes and playbooks, as well as investing in an incident management platform that can track and coordinate analytics tasks and present results to all team members for further review and scrutiny.

AI technologies will continue evolving and will provide significant value over time. For now, focus on improving your security analytics practice as a readiness exercise for AI adoption.


Alexander Poizner is an information security expert, leader, and entrepreneur. Beginning his technology career at the age of 15 as one of the software developers on the Human Genome Project, Alexander has experienced the evolution of cybersecurity threats and technologies since the late nineties. Specializing in security architecture, strategy, and management, Alexander worked in large retail, e-commerce, and professional services organizations before launching his own professional services company. After a merger with IntelliGO Networks, he remained in the role of VP of Operations, leading the MSSP, Engineering, and PMO teams.

Alexander currently works on his new security venture, Parabellyx, and advises security start-ups on product strategy. He also researches the effects of cognitive biases on security analytics and incident response.

Alexander holds a B.App.Sc degree in Electrical Engineering from the University of Toronto and multiple security designations.

The opinions expressed in this blog are those of Alexander Poizner and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.