Almost every day, there's news about a massive data leak -- a breach at Yahoo that reveals millions of user accounts, or a compromise involving Gmail phishing scams. Security professionals are constantly moving the chess pieces around, but it can be a losing battle.

Yet one ally has emerged in recent years. Artificial intelligence can stay vigilant at all times, looking for patterns in behavior and alerting you to new threats.

While AI is nowhere close to perfect, experts tell CSO that machine learning, adaptive intelligence, and massive data models that can spot hacking much faster than any human are here to help.

"There are some groundbreaking AI solutions built around cyber security analytics," says George Avetisov, the CEO and cofounder of biometric security company HYPR.

"The processes behind threat intelligence and breach discovery have remained incredibly slow due to the need for a human element. AI is transforming the speed at which threats are identified and attacks are mitigated by greatly increasing the speed at which such intelligence is processed."

According to Avetisov, the big change has to do with removing the rules-based engines that have been in use at larger companies for decades. An AI adapts and learns about threats in real time, and it can analyze large data sets that are often fragmented and overlap with one another.

In this scenario, he says, the role of a human operator is to weed out false positives and, to an ever-increasing degree, make sure the data sets fed into an AI engine are accurate and robust. In some ways, an AI is only as intelligent as the data it analyzes.
What's interesting is that an AI can also predict behavior based on current data sets, adapting your own security infrastructure based on what could potentially lead to a breach.

Novel approaches

For now, AI is mostly used for malware detection, spotting phishing attacks, and blocking brute-force intrusions.

In the future, AI could be added to services we all rely on each day. In Gmail, for example, when you receive an email that looks legitimate, an AI can scan countless variables -- such as the originating IP address, location data, the word choice and phrasing in the email, and other factors -- and alert you to a phishing scam.

One of the most interesting uses for AI in blocking attacks has to do with classification. Mark Testoni, the president and CEO of enterprise security company SAP NS2, told CSO that an AI can quantify the level of threat in ways that would normally require much more human effort.

"An AI has supervised learning capabilities using neural networks for entity and pattern recognition for intrusion detection systems and event forensics applications," says Testoni. "They can classify entities and events to reduce mean time to identification of problems, and analyze the behavior behind the attacks. For example, what does the attacker want, how will it affect my organization, what aspects of my business are at most risk and the impact analysis of the attack itself?"

Another area of focus: having an AI inspect all network traffic. Today, it can be difficult to block a harmful email or attachment because there may not yet be a rule for that data, or the harmful agent has not yet been detected. Forensic security tends to look at the damage only after it takes place.
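The Gmail scenario described above amounts to scoring an email on several signals at once. A minimal sketch of that idea, with entirely hypothetical weights, feature names, and phrase lists -- a production system would learn these from labeled mail rather than hard-coding them:

```python
# Illustrative only: combine a few phishing signals into one risk score.
# Weights, thresholds, and phrase list are invented for this sketch.
SUSPICIOUS_PHRASES = {"verify your account", "urgent action required", "click here"}

def phishing_score(sender_ip_reputation: float, geo_mismatch: bool, body: str) -> float:
    """Return a 0..1 risk score from a few email signals."""
    score = 0.0
    score += 0.4 * (1.0 - sender_ip_reputation)  # poor IP reputation raises risk
    score += 0.3 if geo_mismatch else 0.0        # origin doesn't match sender's usual location
    body_lower = body.lower()
    hits = sum(phrase in body_lower for phrase in SUSPICIOUS_PHRASES)
    score += min(0.3, 0.15 * hits)               # suspicious wording, capped
    return min(1.0, score)

# A low-reputation sender, mismatched location, and urgent wording score high:
risk = phishing_score(0.2, True, "Urgent action required: click here to verify your account")
print(round(risk, 2))  # 0.92
```

A learned model replaces the hand-set weights, but the structure -- many weak signals fused into one decision -- is the same classification idea Testoni describes.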
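Forensic analysis looks backward at damage already done; a streaming detector instead compares live traffic against a recent baseline. A toy sketch of that contrast, assuming simple per-interval byte counts (real systems model far richer features):

```python
import statistics

def flag_anomalies(byte_counts, window=5, threshold=3.0):
    """Flag indices whose traffic deviates sharply from the recent baseline."""
    flagged = []
    for i in range(window, len(byte_counts)):
        baseline = byte_counts[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline) or 1.0  # avoid div-by-zero on flat traffic
        z_score = (byte_counts[i] - mean) / stdev
        if abs(z_score) > threshold:
            flagged.append(i)  # candidate for blocking, no signature needed
    return flagged

# A sudden 5000-byte burst stands out against steady ~100-byte traffic:
traffic = [100, 104, 98, 101, 99, 102, 5000, 100]
print(flag_anomalies(traffic))  # [6]
```

The point is that no pre-written rule for the burst exists; the deviation itself is the signal.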
However, as Nathan Wenzler, the chief security strategist at AsTech Consulting, explains, an AI can ingest the data, look for patterns, and block network traffic in real time.

Fred Wilmot, the interim CEO/CTO of threat detection company PacketSled, made an interesting point about all of these AI advancements. In the coming months and years, security professionals will rely more on machine learning, and their role might shift toward that of AI engineers who create the learning models. For now, the AI is still not mature enough, especially for the fraud detection and mitigation that takes place in the financial sector.

The dark side of using AI to fight hacking

Avetisov did mention one dark side. While security professionals can rely on AI to help block malware attacks and other intrusions, hackers are leaning on AI as well, using machine learning to find weak endpoints in a counter-offensive of their own.

"Hackers are just as sophisticated as the communities that develop capability to defend themselves against hackers," says SAP NS2's Testoni. "They are using the same techniques, such as intelligent phishing, analyzing behavior of potential targets to determine what type of attack to use, 'smart malware' that knows when it is being watched so it can hide."

"We've seen more and more attacks over the years take on morphing characteristics, making them harder to predict and defend against," says Wenzler from AsTech Consulting. "Now, leveraging more machine learning concepts, hackers can build malware that can learn about a target's network and change up its attack methodology on the fly."

Neill Feather, the president of website security company Sitelock, noted that the AI programming someone might use for criminal hacking is more complex, and there are higher costs involved.
Still, the incentive will remain as long as unethical AI leads to more breaches.

In the end, the cyber war will continue -- quite possibly between the AI bots.