In the world of security perimeter defenses, more is not necessarily better. This is particularly true with threat detection, where software that discovers 90 million possible threats a week is really no more helpful than software that finds 9 million. Indeed, from a signal-to-noise perspective, those additional discoveries may work against security, in that they make finding the 2,000 actual attack attempts more difficult. This is the much-dreaded alert fatigue dilemma.

This is the problem that machine learning, especially unsupervised machine learning, aimed to solve. The premise was that unsupervised ML would quickly learn the patterns and, thereafter, instantly recognize a true threat and distinguish it from the ever-present noise of a large company network.

The hiccup with this theory is that unsupervised ML perimeters suffer from the same weakness as many antivirus systems: to identify the pattern of a serious attack, the system must be successfully victimized by that attack method at least once. But cyber-attack methods evolve and change over time. So, as long as cyber criminals continually develop new methods, ML defenses will never be absolute.

Still, can ML be more effective than manual human alternatives? Often, the answer is "yes." But first, CISOs and CSOs must understand where ML works best and where it doesn't.

"We've seen different use cases. Does it work well for phishing attacks? Yes. Complex social engineering attacks? No," says Bindu Sundaresan, practice lead at AT&T Security Consulting. "It has a ways to go as it's still a learning tool for us. The more data we feed into it, the better it gets."

In some respects, the "it can't stop it until it's been hurt by it" criticism isn't entirely fair. First, human security analysts suffer from the same flaw. Second, prior experience of the attack vector matters only if the system is looking for that specific pattern. ML instead looks for pattern deviations.
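As a rough illustration of what "looking for pattern deviations" means in practice, the sketch below flags users whose current activity strays far from their own historical baseline, using a simple z-score. The user names, login counts, and the 3-sigma threshold are all hypothetical; real behavioral-analytics systems use far richer features and models, but the underlying idea is the same: alert on departures from a learned norm rather than on known attack signatures.

```python
from statistics import mean, stdev

def deviation_scores(history, current):
    """Score how far each user's current activity sits from their own
    baseline, measured in standard deviations (a simple z-score)."""
    scores = {}
    for user, counts in history.items():
        mu, sigma = mean(counts), stdev(counts)
        scores[user] = 0.0 if sigma == 0 else (current[user] - mu) / sigma
    return scores

# Hypothetical daily login counts per user over the past week.
history = {
    "alice": [4, 5, 6, 5, 4, 5, 6],
    "bob":   [3, 4, 3, 4, 3, 4, 3],
}
today = {"alice": 5, "bob": 40}  # bob's activity spikes sharply

scores = deviation_scores(history, today)
flagged = [u for u, z in scores.items() if abs(z) > 3]
print(flagged)  # bob's spike sits far outside his baseline; alice is normal
```

Note that nothing here depends on having seen an attack before: a compromised account is flagged simply because its behavior no longer looks like its own past.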
In other words, it's not only looking for something that resembles an attack. It's also looking for atypical user behavior. And that is something that software tends to do far better than mammals.

"Humans cannot possibly deal with all the alerts they're seeing. AI will help with the triage piece," Sundaresan says. "Most SOC (security operations center) events are measured by how long it takes to triage an event. The newer technologies will help identify the behavior and take an action on it."

A key question in any ML analytics security strategy is: When does it make the most sense for humans to get involved? Alternatively, how far does it make sense to push the algorithms? Sundaresan's point about ML code taking actions raises a further question: Which actions should it take? Speed is essential in thwarting attacks, so it is debatable whether checking with humans before acting makes sense.

Another unknown factor here is cooperation among large companies in general, and direct competitors in particular. If all companies immediately shared security incident details with a centralized source, the patterns associated with new attack methods could be identified much faster. Will companies overcome their wariness enough to trust an independent group with such highly sensitive security information? For ML to ultimately deliver for enterprises, that trust, even on a limited basis, needs to happen.

AT&T is at the forefront of research into how machine learning will benefit security. Learn more about AT&T Cybersecurity Consulting Services.