By Maria Korolov, Contributing writer

How AI can help you stay ahead of cybersecurity threats

Feature
Oct 19, 2017 | 14 mins
Data and Information Security | Machine Learning | Security

Artificial intelligence and machine learning can be force multipliers for under-staffed security teams needing to respond faster and more effectively to cyber threats.

[Image: artificial intelligence face on top of a computer grid. Credit: Thinkstock]

Since the 2013 Target breach, it’s been clear that companies need to respond better to security alerts even as alert volumes have gone up. With this year’s fast-spreading ransomware attacks and ever-tightening compliance requirements, response must be much faster. Adding staff is tough amid the cybersecurity hiring crunch, so companies are turning to machine learning and artificial intelligence (AI) to automate tasks and better detect bad behavior.

What are artificial intelligence and machine learning?

In a cybersecurity context, AI is software that perceives its environment well enough to identify events and take action toward a predefined purpose. AI is particularly good at recognizing patterns and the anomalies within them, which makes it an excellent tool for detecting threats.

Machine learning is often used with AI. It is software that can “learn” on its own based on human input and results of actions taken. Together with AI, machine learning can become a tool to predict outcomes based on past events.

Using AI and machine learning to detect threats

Barclays Africa is beginning to use AI and machine learning to both detect cybersecurity threats and respond to them. “There are powerful tools available, but one must know how to incorporate them into the broader cybersecurity strategy,” says Kirsten Davies, group CSO at Barclays Africa.

For example, the technology is used to look for indicators of compromise across the firm’s network, both on premises and in the cloud. “We’re talking about enormous amounts of data,” she says. “As the global threat landscape is advancing quite quickly, both in ability and collaboration on the attacker side, we really must use advanced tools and technologies to get ahead of the threats themselves.”

AI and machine learning also let her deploy her people on the most valuable human-led tasks. “There is an enormous shortage of the critical skills that we need globally,” she says. “We’ve been aware of that coming for quite some time, and boy, is it ever upon us right now. We cannot continue to do things in a manual way.”

The bank isn’t alone. San Jose-based engineering services company Cadence Design Systems, Inc., continually monitors threats to defend its intellectual property. Between 250 and 500 gigabits of security-related data flow in daily from more than 30,000 endpoint devices and 8,200 users — and there are only 15 security analysts to look at it. “That’s only some of the network data that we’re getting,” says Sreeni Kancharla, the company’s CISO. “We actually have more. You need to have machine learning and AI so you can narrow in on the real issues and mitigate them.”

Cadence uses these technologies to monitor user and entity behavior, and for access control, through products from Aruba Networks, an HPE company. Kancharla says that the unsupervised learning aspect of the platform was particularly attractive. “It’s a changing environment,” he says. “These days, the attacks are so sophisticated, they may be doing little things that over time grow into big data exfiltration. These tools actually help us.”

Even smaller companies struggle with the challenge of an overload of security data. Daqri is a Los Angeles-based company that makes augmented reality glasses and helmets for architecture and manufacturing. It has 300 employees and just a one-person security operations center. “The challenge of going through and responding to security events is very labor-intensive,” says Minuk Kim, the company’s senior director of information technology and security.

The company uses AI tools from Vectra Networks to monitor traffic from the approximately 1,200 devices in its environment. “When you look at the network traffic, you can see if someone is doing port scans or jumping from host to host, or transferring out large sections of data through an unconventional method,” Kim says.

The company collects all this data, parses it, and feeds it into a deep learning model. “Now you can make very intelligent guesses about what traffic could potentially be malicious,” he says.
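To make that concrete, here is a minimal Python sketch of the general approach: parse flow records into a handful of features, then score new flows with a trained classifier. It is an illustration only; the feature names and training data are invented, and a random forest stands in for the deep learning model Kim describes.

```python
# Illustrative sketch only -- invented features, with a random forest standing
# in for a deep learning model. Each flow record is featurized as:
# [distinct_ports_touched, distinct_hosts_contacted, bytes_out, uncommon_protocol]
from sklearn.ensemble import RandomForestClassifier

training_flows = [
    [2, 1, 5_000, 0],          # ordinary workstation traffic
    [3, 2, 20_000, 0],
    [180, 1, 1_000, 0],        # port scan: many ports, little data
    [4, 60, 8_000, 0],         # jumping from host to host
    [2, 1, 900_000_000, 1],    # bulk transfer over an unconventional channel
]
labels = [0, 0, 1, 1, 1]       # 0 = benign, 1 = suspicious

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(training_flows, labels)

new_flow = [150, 3, 2_000, 0]  # looks like a port scan
print(model.predict_proba([new_flow])[0][1])  # probability the flow is suspicious
```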

It needs to happen quickly. “It’s always about the ability to tighten up the detection and response loop,” he says. “This is where the AI comes in. If you can cut the time to review all these incidents, you dramatically improve the ability to know what’s happening in your network, and when a critical breach happens, you can identify and respond quickly and minimize the damage.”

AI adoption for cybersecurity increasing

AI and machine learning are making a significant difference in how fast companies can respond to threats, confirms Johna Till Johnson, CEO at Nemertes Research. “This is a real market,” she says. “There is a real need, and people are really doing it.”

Nemertes recently conducted a global security study, and the average time it took a company to spot an attack and respond to it was 39 days — but some companies were able to do it in hours. “The speed was correlated with automation, and you can’t automate these responses without using AI and machine learning,” she says.

Take detection, for example: “The median time for detection is one hour,” she says. “High-performing companies typically do this in under 10 minutes; low-performing companies take days to weeks. Machine learning and analytics can bring this time to effectively zero, which is why the high-performing companies are so fast.”

Similarly, when analyzing threats, the median time is three hours. High-performing companies take just minutes; others take days or weeks. Behavioral threat analytics have already been deployed by 21 percent of the companies surveyed, she says, and another 12 percent say they will have it in place by the end of 2017.

Financial services firms in particular are on the leading edge, she says, since they have high-value data, tend to be ahead of the curve on cybersecurity, and have money to spend on new technologies. “Because it’s not cheap.”

When it comes to broader applications of AI and machine learning, the usage numbers are even higher. According to a Vanson Bourne survey released on October 11, 80 percent of organizations are already using AI in some form. The technology is already paying off. The single biggest revenue impact of AI was in product innovation and R&D, with 50 percent of respondents saying the technology was making a positive difference, followed by customer service at 46 percent and supply chain and operations at 42 percent. Security and risk wasn’t far behind, with 40 percent seeing bottom-line benefits.

The numbers are likely to keep going up. According to a recent Spiceworks survey, 30 percent of organizations with more than 1,000 employees are using AI in their IT departments, and 25 percent plan to adopt it next year.

Seattle-based marketing agency Garrigan Lyman Group is deploying AI and machine learning for a number of cybersecurity tasks, including monitoring for unusual network and user activity and spotting new phishing emails. Otherwise, it’s impossible to keep up, says Chris Geiser, the company’s CTO. “The hackasphere is a volunteer army and it doesn’t take much education or knowledge to get started,” he says. “They automated their operations a long time ago.”

AI and machine learning give the company an edge. Although the company is small — just 125 employees — cloud-based deployment makes it possible to get the latest technology, and to get it quickly. “We can have those things up and running and adding value within a couple of weeks,” he says. The Garrigan Lyman Group has deployed AI-enabled security tools from Alert Logic and Barracuda, and Geiser says that he can see the products getting smarter and smarter.

In particular, AI can help tools adapt quickly to a company’s requirements without significant up-front training. “For example, an AI model can automatically learn that for some companies if the CEO is using a non-corporate email address it is anomalous,” says Asaf Cidon, VP of content security services at Barracuda Networks, Inc. “In other companies, it is totally normal for the CEO to use their personal email when they are communicating from their mobile device, but it would not be normal for the CFO to send emails from their personal address.”
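One way to picture that kind of per-company baseline (a sketch under assumptions, not Barracuda’s implementation) is a model that records which sending addresses it has seen for each executive during a learning period, then flags addresses it has never seen:

```python
# Hypothetical per-sender baseline; names and thresholds are invented.
from collections import defaultdict

class SenderBaseline:
    def __init__(self, min_observations=50):
        # person -> {from_address -> times seen}
        self.history = defaultdict(lambda: defaultdict(int))
        self.min_observations = min_observations

    def observe(self, person, from_address):
        """Record one legitimate message during the learning period."""
        self.history[person][from_address] += 1

    def is_anomalous(self, person, from_address):
        """Flag addresses never seen for this person, once enough history exists."""
        seen = self.history[person]
        if sum(seen.values()) < self.min_observations:
            return False  # too little history to judge yet
        return seen.get(from_address, 0) == 0

baseline = SenderBaseline(min_observations=2)
baseline.observe("ceo", "ceo@example.com")
baseline.observe("ceo", "ceo.personal@gmail.example")  # normal at this company
print(baseline.is_anomalous("ceo", "ceo.personal@gmail.example"))  # False
print(baseline.is_anomalous("ceo", "attacker@evil.example"))       # True
```

The same trained baseline would flag the CEO’s personal address at a company where it never appeared during the learning period, which is the company-by-company difference Cidon describes.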

Another benefit of cloud delivery is that it’s easier for vendors to improve their products based on feedback from their entire customer base. “Cybersecurity is a lot like neighborhood watch,” Geiser says. “If I didn’t like what I saw on the other end of the block, it tips everyone off that there could be a problem.”

In the case of phishing emails or network attacks, new threats can be spotted when they first show up in other time zones, giving companies hours of early warning. That does require a level of trust in the vendor, Geiser says. “We’ve gone on reputation, references, on a number of different due diligence paths to make sure that the vendors are the right vendors to use, and follow best practices for audit and compliance to make sure that only the right person has access,” he says.

As companies first transition from manual processes to AI-based automation, they look for another kind of trust — in addition to having visibility into the vendors’ operations, it helps to have visibility into the AI’s decision-making process. “A lot of the AI out there right now is this mysterious black box that just magically does stuff,” says Mike Armistead, CEO and co-founder at Respond Software, Inc. “The key in expert systems is to make it transparent, so people trust what you do. That gets even better feedback, and creates a nice virtuous cycle of reinforcing and changing the model as well.”

“You always need to know why it made the decision,” agrees Matt McKeever, CISO at LexisNexis Legal and Professional. “We need to make sure we understand how the decision was made.”

The company recently began using GreatHorn to secure email for its 12,000 employees. “If we start getting emails from a domain that looks similar to a legitimate one, it will flag it as a domain look-alike, and it tells us, ‘We flagged it because it looks like a domain you normally talk to, but the domain header flags don’t look right,'” says McKeever. “We can see how it figured that out, and we can say, ‘Yes, that absolutely makes sense.'”
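The look-alike check itself can be sketched with simple edit distance (GreatHorn’s actual logic also weighs header anomalies, per McKeever; the domains and threshold below are invented):

```python
# Flag sending domains that are close to, but not exactly, a known domain.
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

KNOWN_DOMAINS = {"lexisnexis.com", "example-partner.com"}  # invented list

def looks_like_known_domain(domain: str) -> bool:
    return any(0 < edit_distance(domain, known) <= 2 for known in KNOWN_DOMAINS)

print(looks_like_known_domain("lexisnexis.com"))  # False: exact match is legitimate
print(looks_like_known_domain("lexisnex1s.com"))  # True: one-character swap
```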

As the level of trust increases and accuracy rates improve, LexisNexis will move from simply flagging suspicious emails to automatically quarantining them. “So far, the results have been really good,” McKeever says. “We have high confidence that what we’re flagging is malicious email, and we’ll start quarantining it, so the user won’t even see it.”

After that, his team will expand the tool into other divisions and business areas at LexisNexis that use Office 365, and look at other ways to take advantage of AI for cybersecurity as well. “This is one of our early forays into machine learning for security,” he says.

How AI gets ahead of the threat landscape

AI gets better with more data. As vendors accumulate large data sets, their systems can also learn to spot very early indications of new threats. Take SQL injections, for example. Alert Logic collects about half a million incidents every quarter from its 4,000 customers, about half of them SQL injection incidents. “There’s not a security company in the world that can look at each one of those with a human set of eyes and see if that SQL injection attempt was a success or not,” says Misha Govshteyn, Alert Logic’s cofounder and SVP of products and marketing.

With machine learning, the vendor is not only able to process the events more quickly, but also correlate them across time and geography. “Some attacks take more than a couple of hours, sometimes days, weeks, and in a few cases months,” he says. “Not only are they taking a long time to execute, but also coming from different parts of the Internet. I think these are incidents that we would have missed before we deployed machine learning.”
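The correlation idea can be shown with a toy example. Assume each incident carries a payload signature, a source region, and a timestamp (invented fields and thresholds, not Alert Logic’s):

```python
# Toy cross-time, cross-geography correlation: one signature seen for weeks
# from several regions suggests a single slow campaign, not isolated probes.
from collections import defaultdict
from datetime import datetime

events = [
    {"sig": "a1f3", "region": "eu-west",  "time": datetime(2017, 9, 1)},
    {"sig": "a1f3", "region": "us-east",  "time": datetime(2017, 9, 20)},
    {"sig": "a1f3", "region": "ap-south", "time": datetime(2017, 10, 5)},
    {"sig": "9c2b", "region": "us-east",  "time": datetime(2017, 10, 5)},
]

campaigns = defaultdict(list)
for event in events:
    campaigns[event["sig"]].append(event)

for sig, group in campaigns.items():
    times = [e["time"] for e in group]
    span_days = (max(times) - min(times)).days
    regions = {e["region"] for e in group}
    if span_days > 14 and len(regions) >= 3:
        print(f"possible slow campaign {sig}: {span_days} days, {len(regions)} regions")
```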

Another security vendor that is collecting a large amount of information about security threats is GreatHorn, Inc., a cloud-based email security vendor that works with Microsoft’s Office 365, Google’s G Suite, and Slack. “We’re now sitting on almost 10 terabytes of analyzed threat data,” says Kevin O’Brien, the company’s co-founder and CEO. “We’re starting to feed that information into a tensor field so we can start to plot relationships between different kinds of communications, different kinds of mail services, different kinds of sentiments in messaging.”

That means that the company can spot new campaigns and send messages to quarantine, or put warning banners on them days before they’re conclusively identified as threats. “Then we can retroactively go back and take them out of all email inboxes where they were delivered,” he says.

Where AI for cybersecurity is headed next

Looking for suspicious patterns in user behavior and network traffic is currently the low-hanging fruit for machine intelligence. Current machine learning systems are getting good at spotting unusual events in high volumes of data and carrying out routine analysis and responses.
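A minimal sketch of that low-hanging fruit, using an off-the-shelf isolation forest over invented per-user activity counts:

```python
# Unsupervised anomaly detection on per-user activity; features are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

# Rows: [logins_per_day, megabytes_downloaded, distinct_hosts_reached]
normal_activity = np.array([
    [4, 120, 3], [5, 90, 2], [3, 150, 4], [6, 110, 3], [4, 100, 2],
])
model = IsolationForest(contamination=0.1, random_state=0).fit(normal_activity)

todays_activity = np.array([
    [5, 105, 3],      # looks routine
    [40, 9000, 60],   # bulk access across many hosts from one account
])
print(model.predict(todays_activity))  # 1 = normal, -1 = anomalous
```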

The next step is to use artificial intelligence to tackle more thorny problems. For example, the real-time cyber risk exposure of a company depends on a large number of factors. Those include unpatched systems, insecure ports, incoming spear phishing emails, number of privileged accounts and insecure passwords, amount of unencrypted sensitive data, and whether it is currently being targeted by a nation-state attacker.

Having an accurate picture of its risks would help a company deploy resources most efficiently, and create a set of metrics for cybersecurity performance other than whether the company has been breached or not. “Today, if you were to try to describe your environment, this data is either not being gathered correctly or not being converted into information,” says Gaurav Banga, founder and CEO at Balbix, Inc., a startup that is specifically trying to tackle the problem of predicting the risk of a breach.

AI is key to solving that challenge. “We have 24 different types of AI algorithms,” Banga says. “We produce a bottom-up model, a risk heat map that covers every aspect of the environment, clickable so you can go down and see why something is red. It is prescriptive, so it tells you that if you can do these things, it can become yellow and eventually green. You can ask questions — ‘What is the number one thing I can do now?’ or ‘What is my phishing risk?’ or ‘What is my risk from WannaCry?'”
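Balbix hasn’t published how those 24 algorithms work, but the general shape of an explainable risk score can be sketched: combine factors like the ones listed above into a weighted score per asset while keeping each factor’s contribution, so a red rating can always be traced to its drivers. The weights and values below are invented:

```python
# Toy explainable risk score; weights and factor values are invented.
FACTOR_WEIGHTS = {
    "unpatched_systems": 0.30,
    "insecure_ports": 0.15,
    "phishing_exposure": 0.20,
    "privileged_accounts": 0.15,
    "unencrypted_sensitive_data": 0.20,
}

def risk_score(factors):
    """Return the overall 0-1 score plus each factor's contribution (the 'why')."""
    contributions = {name: FACTOR_WEIGHTS[name] * value
                     for name, value in factors.items()}
    return sum(contributions.values()), contributions

score, why = risk_score({
    "unpatched_systems": 0.9,   # most hosts missing critical patches
    "insecure_ports": 0.2,
    "phishing_exposure": 0.6,
    "privileged_accounts": 0.4,
    "unencrypted_sensitive_data": 0.3,
})
print(f"overall risk {score:.2f}, top driver: {max(why, key=why.get)}")
```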

In the future, AI will also help companies determine what new security technologies they need to invest in. “Most companies today don’t know how much to spend on cybersecurity and how to spend it,” says James Stanger, chief technology evangelist at CompTIA. “I think we need AI to help provide metrics, so that as a CIO turns around and talks to the CEO or the board and says, ‘Here’s the money we need and here are the resources we need,’ they have the true and useful metrics to justify those costs.”

There’s a lot of room for progress, says Alert Logic’s Govshteyn. “There is very little use of AI in the security space,” he says. “I think we’re actually behind other industries. It’s amazing to me that we have self-driving cars before we have self-defending networks.”

In addition, today’s AI platforms don’t actually have an understanding of the world. “What these technologies are very good at are things like classification of data based on similar data sets that they’ve been trained on,” says Steve Grobman, CTO at McAfee LLC. “But AI isn’t really intelligent. It doesn’t understand the concept of an attack.”

As a result, a human responder is still a critical component of a cyber defense solution. “In cyber security, you’re trying to detect an adversary who is also human and is trying to thwart your detection techniques,” Grobman says.

That’s different from other areas where artificial intelligence is currently being applied, such as image and speech recognition or weather forecasting. “It’s not like the hurricane is saying, ‘I’m going to change the laws of physics and make water evaporate differently to make it more difficult to track me,’” says Grobman. “But in cybersecurity, that’s exactly what’s happening.”

Progress is being made on that front. “There’s a research area called generative adversarial networks, where you have two machine learning models where one tries to detect something and the other sees if something was detected and tries to bypass it,” says Sven Krasser, chief scientist at CrowdStrike, Inc. “You can use things like that for red teaming, for figuring out what new threats can be.”
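A full generative adversarial network is beyond a short example, but the adversarial loop Krasser describes can be sketched in simplified form: train a detector, let an “evader” nudge malicious samples toward the benign region, then retrain the detector on what it missed. All data, step sizes, and rounds below are invented for illustration:

```python
# Simplified adversarial red-teaming loop -- not a full GAN.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
benign = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
malicious = rng.normal(loc=3.0, scale=1.0, size=(200, 2))

X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 200)
detector = LogisticRegression().fit(X, y)

for rnd in range(3):
    # Evader: step malicious samples against the direction the detector
    # associates with "malicious" (a crude gradient-style evasion).
    direction = detector.coef_[0] / np.linalg.norm(detector.coef_[0])
    malicious = malicious - 0.5 * direction

    caught = detector.predict(malicious).mean()  # fraction still detected
    print(f"round {rnd}: {caught:.0%} of evasive samples still detected")

    # Detector retrains with the new evasive samples labeled malicious.
    X = np.vstack([X, malicious])
    y = np.concatenate([y, np.ones(len(malicious), dtype=int)])
    detector = LogisticRegression().fit(X, y)
```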
