False positives, or alerts that incorrectly indicate a security threat is present in a specific environment, are a major problem for security operations centers (SOCs). Numerous studies have shown that SOC analysts spend an inordinate amount of time and effort chasing down alerts that suggest an imminent threat to their systems but turn out to be benign in the end.

Research that Invicti conducted recently found that SOCs waste an average of 10,000 hours and some $500,000 annually validating unreliable and incorrect vulnerability alerts. Another survey, conducted by Enterprise Strategy Group (ESG) for Fastly, found organizations reporting an average of 53 alerts a day from their web application and API security tools. Nearly half (45%) are false positives, and nine in ten respondents described false positives as having a negative impact on the security team.

"For SOC teams, false positives are one of the biggest pain points," says Chuck Everette, director of cybersecurity advocacy at Deep Instinct. A SOC's primary focus is to monitor for security events and to investigate and respond to them in a timely manner. "If they are inundated with hundreds or thousands of alerts that have no true security significance, this distracts them from responding efficiently and effectively to real threats," he says.

Eliminating false positives entirely can be near impossible. There are, however, ways that SOCs can minimize the time spent chasing them down. Here are five of them:

1. Focus on the threats that matter

When configuring and tuning security alerting tools such as intrusion detection systems and security information and event management (SIEM) systems, make sure you define rules and behaviors that alert you only on the threats that are relevant to your environment.
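The scoping idea can be sketched in a few lines. This is a minimal, hypothetical example, not any SIEM product's API: the alert fields, the technique list, and the `in_scope()` helper are all assumptions made for illustration.

```python
# Environment-aware alert filtering (illustrative sketch).
# The technique IDs, asset names, and alert fields below are hypothetical.
RELEVANT_TECHNIQUES = {"T1059", "T1078", "T1190"}  # threats this org actually cares about
CRITICAL_ASSETS = {"web-prod-01", "db-prod-02"}    # hosts with material business impact

def in_scope(alert: dict) -> bool:
    """Keep an alert only if it maps to a relevant technique or a critical asset."""
    return (alert.get("technique") in RELEVANT_TECHNIQUES
            or alert.get("host") in CRITICAL_ASSETS)

alerts = [
    {"id": 1, "technique": "T1059", "host": "dev-laptop-7"},   # relevant technique
    {"id": 2, "technique": "T1021", "host": "test-vm-3"},      # out of scope
    {"id": 3, "technique": "T1021", "host": "db-prod-02"},     # critical asset
]
actionable = [a for a in alerts if in_scope(a)]
```

The point is not this particular filter but the habit: every rule should encode an explicit statement of what is relevant to your environment, rather than alerting on everything the tool can see.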
Security tools can aggregate a lot of log data, not all of which is necessarily relevant from a threat standpoint to your environment.

The deluge of false positives that most SOCs manage is the byproduct of one of three things, says Tim Wade, technical director, CTO team at Vectra. "First, correlation-based rules often lack the capacity to express a sufficient number of features necessary to raise both detection sensitivity and specificity to actionable levels," he says. As a result, detections can surface threat behaviors but fail to distinguish them from benign behaviors.

The second issue is that behavioral rules that focus principally on anomalies are good at retrospectively finding threats, he says, but they often fail to generate a signal worth acting on. "In the scale of any enterprise, 'weird is normal,'" he says. That means anomalies are the rule, not the exception, so chasing down every single anomaly is a waste of time and effort.

"Thirdly, SOCs lack the sophistication in their own incident classification to distinguish malicious true positives from benign true positives." This results in benign true positives being lumped in the same category as false positives, effectively burying data that could enable iterative improvements in detection engineering efforts, Wade says.

The primary causes of false positives are a failure by SOCs to understand what a true indicator of compromise looks like in their specific environment and a lack of good data to test rules, says John Bambenek, principal threat hunter at Netenrich. Many security vendors routinely publish indicators of compromise as part of their research, but sometimes a valid indicator of compromise is not sufficient on its own to indicate a threat to a specific environment. A threat actor may use Tor, so its presence is relevant; that doesn't mean every use of Tor is a signal of that threat actor's presence on a network.
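Wade's third point, the three-way classification, can be made concrete. The sketch below is illustrative only, the verdict labels and example data are not drawn from any real tool: it simply shows why a benign true positive deserves its own bucket instead of being filed as a false positive.

```python
from collections import Counter
from enum import Enum

# Three-way triage outcome (illustrative). Collapsing BENIGN_TP into
# FALSE_POSITIVE buries the data detection engineers need to iterate.
class Verdict(Enum):
    MALICIOUS_TP = "malicious true positive"  # real threat; rule fired correctly
    BENIGN_TP = "benign true positive"        # rule fired correctly, activity was benign
    FALSE_POSITIVE = "false positive"         # rule fired incorrectly

# Hypothetical closed-out investigations for one rule.
closed_alerts = [Verdict.BENIGN_TP, Verdict.FALSE_POSITIVE, Verdict.BENIGN_TP,
                 Verdict.MALICIOUS_TP, Verdict.FALSE_POSITIVE]

tally = Counter(closed_alerts)
```

A rule with a high benign-true-positive count is working as written but needs environment-specific allow-listing; a rule with a high false-positive count is simply broken. The two call for different fixes, which is exactly the signal that gets lost when both land in one bucket.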
"Most research requires contextual information, and many companies are behind the ball in creating contextual detections," he says.

2. Don't fall prey to the base rate fallacy

Security practitioners often make the mistake of taking a vendor's claims about low false positive rates too literally. Just because a SOC tool might claim a false positive rate of 1% and a false negative rate of 1% doesn't mean the probability of a true positive is 99%, says Sounil Yu, CISO at JupiterOne. Because legitimate traffic is typically orders of magnitude higher in volume than malicious traffic, true positive rates often fall well below what security managers might intuitively expect. "The actual probability that it's a true positive is much lower, and that probability decreases even further depending upon how many total events are processed," he says.

As an example, he points to a SOC that might be handling 100,000 events daily, of which 100 are real alerts and 99,900 are false alarms. In this scenario, a 1% false positive rate means the security team would have to chase down 999 false alerts, and the probability of a true positive is just 9%, Yu says. "If we increase the number of events to 1,000,000 while keeping the number of actual alarms at 100, the probability drops further to less than 1%."

The main takeaway for administrators is that small differences in a false positive rate can significantly affect the number of false alarms that SOC teams need to chase down, Yu notes. So, it's important that detection rules are continuously tuned to reduce false positive rates and that the initial investigation of alerts is automated as much as possible. Security teams should also resist the tendency to feed more data than is needed into their detection engines.
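Yu's arithmetic can be checked directly. The short function below works through his numbers: with 100,000 daily events and 100 real threats, a 1% false positive rate yields 999 false alerts against at most 99 detected threats, so only about 9% of alerts are real.

```python
# Working through Yu's base-rate example. Precision depends on how rare
# real threats are, not just on the advertised error rates.
def alert_precision(total_events: int, real_threats: int,
                    fp_rate: float = 0.01, fn_rate: float = 0.01) -> float:
    benign = total_events - real_threats
    false_alerts = benign * fp_rate             # benign events that trigger alerts
    true_alerts = real_threats * (1 - fn_rate)  # real threats actually detected
    return true_alerts / (true_alerts + false_alerts)

p1 = alert_precision(100_000, 100)    # 99 / (99 + 999) ≈ 0.09
p2 = alert_precision(1_000_000, 100)  # 99 / (99 + 9999) < 0.01
```

This is the base rate fallacy in miniature: the error rates stayed fixed at 1%, yet tenfold more benign traffic pushed the share of real alerts from about 9% to under 1%.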
"Instead of arbitrarily stuffing more data into your detection pipelines, ensure that you only have the data that you need to process your detection rules and leave the other data for automated enrichment afterwards," he says.

3. Hack your own network

SOC analysts are often more fatigued chasing down low-impact security alerts than they are dealing with false positives, says Doug Dooley, COO at Data Theorem. This can happen, for instance, when security teams are organized to look for code hygiene problems that may or may not ever be exploitable in the production app instead of focusing on problems that have a material business impact. "SecOps teams can easily be bogged down by non-mission-critical alerts which are unfairly categorized as 'false positives'," Dooley says.

It's only when the security team works closely with business leaders that it can focus on what really matters and filter out the noise. "If a data breach of your most popular mobile app could substantially damage your brand, lower your stock price, and likely make you lose customers, then focusing on exploitable vulnerabilities in your app stack has high business priority."

Instead of focusing on theoretical attacks and scenarios, Dooley recommends that organizations conduct breach tests on their own systems to verify whether any exploitable vulnerabilities can actually be compromised. Such testing and verification can build trust and credibility between security operations teams and DevOps teams, he says.

4. Maintain good records and metrics

Maintaining records of investigations that turned into wild goose chases is a good way to minimize the chances of that happening again. To improve detection and fine-tune alerts, SOCs need to be able to filter out noise from actionable signals.
That can only happen when organizations have data they can look back at and learn from.

"In a world of limited time, resources, and attention, every time effort is expended on a false positive, the business accrues some risk that an actionable signal is being ignored," Wade at Vectra says. "It's hard to overstate the need for SOCs to maintain effective records and metrics of their investigations to improve their detection engineering efforts over time." Unfortunately, for many SOCs the chaos of the moment tends to overrun the long-term planning necessary to improve, he says.

Security alerting tools should have a feedback mechanism and metrics that allow defenders to track false positive rates by provider and information source, says Bambenek. "If you are using a data lake of security telemetry, you can also look at indicators and new rules against previous data to get some idea of false positive rates as well," he says.

5. Automation alone is not enough

Automation, when implemented correctly, can help alleviate challenges related to alert overload and skills shortages in modern SOCs. However, organizations need skilled staff, or access to it via a managed service provider, for instance, to get the most out of their technology.

"With the time to manually confirm each vulnerability clocking in at one hour, teams could be spending a whopping 10,000 hours taming false positives annually," says Sonali Shah, chief product officer at Invicti. Yet more than three quarters of respondents in Invicti's survey said they either always or frequently manually verify vulnerabilities.
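Bambenek's per-source metric is straightforward to compute from investigation records. The record format, source names, and helper below are hypothetical, a sketch of the idea rather than any product's feedback mechanism.

```python
from collections import defaultdict

# Track false positive rates per feed or detection source so noisy
# providers surface in the metrics (illustrative record format).
def fp_rate_by_source(closed_alerts):
    """closed_alerts: iterable of (source, was_false_positive) pairs."""
    totals, fps = defaultdict(int), defaultdict(int)
    for source, was_fp in closed_alerts:
        totals[source] += 1
        if was_fp:
            fps[source] += 1
    return {source: fps[source] / totals[source] for source in totals}

# Hypothetical investigation history.
history = [("feed-a", True), ("feed-a", True), ("feed-a", False),
           ("ids-rules", False), ("ids-rules", True)]
rates = fp_rate_by_source(history)
# feed-a: 2 of 3 alerts were false positives; ids-rules: 1 of 2
```

Run against a telemetry data lake, the same tally answers Bambenek's other suggestion too: replaying a proposed rule over historical data estimates its false positive rate before it ever pages an analyst.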
In these situations, automation that is integrated within existing workflows can help alleviate challenges associated with false positives.

To get the most from the technology, SOCs need operators who can tune logging and detection tools and develop the scripts or custom tools that glue vendor tools together, says Daniel Kennedy, an analyst with S&P Global Market Intelligence. Operators with knowledge, gained over time, of the custom nature of their organization's technology are especially useful, he says. They can help SOCs save time by examining daily reports for patterns, developing playbooks, tuning vendor tools, and introducing appropriate levels of automated response.

Alerts, events, and logs must be tuned, says Deep Instinct's Everette. Subject matter experts must configure the system to ensure that only high-fidelity alerts are brought to the surface and that corresponding event triggers are set to ensure an elevated, prioritized response when needed. To do this effectively, organizations must correlate and analyze data from multiple sources such as security logs, events, and threat data. Security alerting tools "are not a set-and-forget type of mechanism," he says.

To get the most out of alerting tools, SOCs need to look for opportunities to augment and enhance each tool's capabilities to reduce the number of false positives and to raise the effectiveness of their overall security stance.
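The multi-source correlation Everette describes can be sketched simply: only escalate when independent sources flag the same host. The source names and the two-source threshold below are illustrative assumptions, not a prescription.

```python
from collections import defaultdict

# Escalate only when independent sources agree on the same host
# (illustrative sketch of multi-source correlation).
def high_fidelity_hosts(signals, min_sources: int = 2):
    """signals: iterable of (source, host) pairs from logs, EDR, threat intel, etc."""
    sources_per_host = defaultdict(set)
    for source, host in signals:
        sources_per_host[host].add(source)
    return {host for host, sources in sources_per_host.items()
            if len(sources) >= min_sources}

# Hypothetical signals from three different tools.
signals = [("auth-logs", "web-01"), ("edr", "web-01"),
           ("auth-logs", "laptop-9"), ("threat-intel", "db-02")]
escalate = high_fidelity_hosts(signals)
# only web-01 was flagged by two independent sources
```

Requiring corroboration trades a little sensitivity for a large gain in fidelity, which is the tuning decision, not a set-and-forget default, that subject matter experts are there to make.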