Although it dates back to the 1950s, artificial intelligence (AI) is the hottest thing in technology today. An overarching term for a set of technologies such as text-to-speech, natural language processing (NLP) and computer vision, AI essentially enables computers to do things normally done by people.

Machine learning, the most prominent subset of AI, is about recognizing patterns in data and having computers learn from them as a human would. These algorithms draw inferences without being explicitly programmed to do so. The idea is that the more data you collect, the smarter the machine becomes.

At the consumer level, AI use cases include chatbots, Amazon’s Alexa and Apple’s Siri, while enterprise efforts see AI software aim to cure diseases and optimize business performance, such as improving customer experience or fraud detection.

There is plenty to back up the hype: a Narrative Science survey found that 38 percent of enterprises are already using AI, a figure set to grow to 62 percent by 2018, while Forrester Research predicts a 300 percent year-on-year increase in AI investment this year. AI is clearly here to stay.

Security wants a piece too

Unsurprisingly, given the constant evolution of criminals and malware, the InfoSec industry also wants a piece of the AI pie. With its ability to learn patterns of behavior by sifting through huge datasets, AI could help CISOs find ‘known unknown’ security threats, automate SOC response and improve attack remediation.
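To make the "learning patterns from data" idea concrete, here is a minimal, hypothetical sketch (not from any vendor mentioned in this article) of a detector that learns a baseline of normal behavior from historical data and flags outliers, rather than relying on hand-written rules:

```python
# Toy anomaly detector: it "learns" a baseline of normal behavior from
# data instead of using explicit hand-coded rules -- the core idea
# behind ML-driven security tooling. Illustrative only.
from statistics import mean, stdev

def fit_baseline(samples):
    """Learn the normal range from historical observations."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

# Hypothetical data: daily failed-login counts for one user account.
history = [3, 5, 4, 6, 2, 5, 4, 3, 5, 4]
baseline = fit_baseline(history)

print(is_anomalous(4, baseline))    # a typical day -> False
print(is_anomalous(250, baseline))  # likely brute-force attempt -> True
```

Real products replace this simple statistical baseline with far richer models over many data sources, but the principle — learn what "normal" looks like, then flag deviations — is the same.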
In short, with skilled personnel hard to come by, AI fills some (but not all) of the gap.

Experts have called for a smart, autonomous security system, and American cryptographer Bruce Schneier believes that AI could offer the answer.

“It is hyped, because security is nothing but hype, but it is good stuff,” said the Resilient Systems CTO. “We’re a long way off AI making humans redundant in cybersecurity, but there’s more interest in [using AI for] human augmentation, which is making people smarter. You still need people defending you. Good systems use people and technology together.”

Martin Ford, futurist and author of ‘Rise of the Robots’, says both white hats and black hats are already leveraging these technologies, such as deep learning neural networks.

“It's already being used on both the black and white hat sides,” Ford told CSO. “There is a concern that criminals are in some cases ahead of the game and routinely utilize bots and automated attacks. These will rapidly get more sophisticated.

“...AI will be increasingly critical in detecting threats and defending systems. Unfortunately, a lot of organizations still depend on a manual process -- this will have to change if systems are going to remain secure in the future.”

Some CISOs, though, are preparing to do just that.

“It is a game changer,” Intertek CISO Dane Warren said.
“Through enhanced automation, orchestration, robotics, and intelligent agents, the industry will see greater advancement in both offensive and defensive capabilities.”

Warren adds that improvements could include responding more quickly to security events, better data analysis and “using statistical models to better predict or anticipate behaviors.”

Andy Rose, CISO at NATS, also sees the benefits: “Security has always had a need for smart processes to apply themselves to vast amounts of disparate data to find trends and anomalies – whether that is identifying and stopping spam mail, or finding a data exfiltration channel. People struggle with the sheer volume of data, so AI is the perfect solution for accelerating and automating security issue detection.”

Security use cases see start-ups boom

Security providers have always tried to evolve with the ever-changing threat landscape, and AI is no different. However, with technology naturally outpacing vendor transformation, start-ups have quickly emerged with novel AI-infused solutions for improving SOC efficiency, quantifying risk and optimizing network traffic anomaly detection.

Relative newcomers Tanium, Cylance and – to a lesser extent – LogRhythm have jumped into this space, but it’s start-ups like Darktrace, Harvest.AI, PatternEx (coming out of MIT) and StatusToday that have caught the eye of the industry. Another relative unknown, SparkCognition, unveiled what it called the first AI-powered cognitive antivirus system at Black Hat 2016.

The tech giants are now playing with AI in security too; Google is working on an AI-based system to replace traditional CAPTCHA forms, and its researchers have taught AI to create its own encryption.
IBM launched Watson for Cyber Security earlier this month, while in January Amazon acquired Harvest.AI, which uses algorithms to identify a business’s important documents and intellectual property, then combines user behavior analytics with data loss prevention techniques to protect them from attack.

Some describe these products as ‘first-gen’ AI security solutions, primarily focused on sifting through data, hunting for threats and facilitating human-led remediation. In the future, AI could automate 24x7 SOCs, enabling workers to focus on business continuity and critical support issues.

“I see AI initially as an intelligent assistant – able to deal with many inputs and access expert-level analytics and processes,” agrees Rose, adding that AI will support security professionals in “higher level analysis and decisions.”

Ignacio Arnaldo is chief data scientist at PatternEx, which offers an AI detection system that automates SecOps tasks, such as detecting APTs from network, application and endpoint logs. He says that AI offers CISOs a new level of automation.

“CISOs are well aware of the problems – they struggle to hire talent, and there are more devices and data that need to be analyzed. CISOs acknowledge the need for tools that will increase the efficiency of their SOCs.
AI holds the promise, but CISOs have not yet seen an AI platform that clearly proves to increase human efficiency.”

“More and more CISOs fully understand that the global skills shortage and the successful large-scale attacks against high-maturity organizations like Dropbox, the NSA/CIA and JPMorgan are all connected,” says Darktrace CTO Dave Palmer, whose firm provides machine learning technology to thousands of companies across 60 countries. “No matter how well funded a security team is, it can’t buy its way to high security using traditional approaches that have been demonstrably failing, and that don’t stand a chance of working in the anticipated digital complexity of our economy in 10 years’ time.”

AI undermined by basics, cybercrime

But for all of this, some think we’re jumping the gun. AI, after all, seems a luxury item in an era in which many firms still don’t do regular patch management.

At this year’s RSA conference, crypto experts mulled how AI is applicable to security, with some questioning how to train the machine and what the human’s role is. Machine reliability and oversight were also mentioned, while others suggested it’s odd to see AI championed given that security is often felled by low-level basics.

“I completely agree,” says Rose. “Security professionals need to continually reassess the basics – patching, culture, SDLP etc. – otherwise AI is just a solution that will tell you about the multitude of breaches you couldn’t, and didn’t, prevent.”

Schneier sees it slightly differently.
He believes security can be advanced and yet still fail at the basics, and he notes pointedly that AI should only be for those who have the right security posture and processes in place and are ready to leverage machine data. Ethics, he says, is only an issue for full automation, and he is unconcerned about such tools being utilized by black hats or surveillance agencies.

“I think this is all a huge threat,” says Ford, disagreeing. “I would rank it as one of the top dangers associated with AI in the near to medium term. There is a lot of focus on ‘super-intelligent machines taking over’...but this lies pretty far in the future. The main concern now is what bad people will do when they have access to AI.”

Warren agrees there are obstacles for CISOs to overcome. “It is forward thinking, and many organizations still flounder with the basics.” He adds that with these AI benefits will come challenges, such as the costly rewriting of apps and the possibility of introducing new threats. “...Advancements in technology introduce new threat vectors. A balance is required, or the environment will advance to a point where the industry simply cannot keep pace.”

AI security is no panacea

AI and security are not necessarily a perfect match. As Vectra CISO Gunter Ollmann recently blogged, buzzwords “have made it appear that security automation is the same as AI security” – meaning there’s a danger of CISOs buying solutions they don’t need, while there are further concerns over AI ethics, quality control and management.

Arnaldo points out that AI security is no panacea either.
“Some attacks are very difficult to catch: there is a wide range of attacks at a given organization, over various ranges of time, and across many different data sources.

“Second, the attacks are constantly changing... Therefore, the biggest challenge is training the AI.”

If this points to some AI solutions being ill-equipped, Palmer adds further weight to the claim.

“Most of the machine learning inventions that have been touted aren’t really doing any learning ‘on the job’ within the customer’s environment. Instead, they have models trained on malware samples in a vendor’s cloud and are downloaded to customer businesses like anti-virus signatures. This isn’t particularly progressive in terms of customer security and remains fundamentally backward-looking.”

So, how soon can we see it in security?

“A way off,” notes Rose. “Remember that the majority of IPS systems are still in IDS mode because firms lack the confidence to rely on ‘intelligent’ systems to make automated choices and unsupervised changes to their core infrastructure. They are worried that, in acting without context, the ‘control’ will damage the service – and that’s a real threat.”

But the need is imperative: “If we don't succeed in using AI to improve security, then we will have big problems, because the bad guys will definitely be using it,” says Ford.

“I absolutely believe increased automation and ease of use are the only ways in which we are going to improve security, and AI will be a huge part of that,” says Palmer.
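Palmer's distinction between models trained once in a vendor's cloud and systems that keep learning in the customer's environment can be sketched simply. The following is a hypothetical, simplified illustration (not Darktrace's actual technology) of a baseline that updates incrementally with every new observation, so the notion of "normal" evolves on the job rather than being frozen at training time:

```python
# Illustrative sketch of on-the-job learning: a running baseline updated
# per observation (Welford's online algorithm), as opposed to a static
# model shipped to customers like an anti-virus signature.

class OnlineBaseline:
    """Maintains a running mean/variance, updated with each new value."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations

    def update(self, x):
        """Fold a new observation into the learned baseline."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_anomalous(self, x, threshold=3.0):
        """Flag values far outside the baseline learned so far."""
        if self.n < 2:
            return False  # not enough history to judge yet
        std = (self.m2 / (self.n - 1)) ** 0.5
        return abs(x - self.mean) > threshold * std

# Hypothetical data: hourly outbound connections from one host.
model = OnlineBaseline()
for value in [10, 12, 11, 13, 12, 11, 10, 12]:
    model.update(value)

print(model.is_anomalous(11))  # within the learned baseline -> False
print(model.is_anomalous(90))  # possible exfiltration spike -> True
```

Because `update` runs continuously in the monitored environment, the detector adapts as behavior drifts, which is the property Palmer argues cloud-trained, signature-style models lack.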