Jon King
Security Technologist and Intel Principal Engineer

Cloud vs. Cloud

Opinion
Nov 15, 2016 | 4 mins
Security

Poisoning machine learning

Machine learning is coming to help you distinguish sophisticated attacks from the noise of everyday usage, identify anomalous behavior that may be malicious, and block attacks on your system before you even know they exist. Machine learning works best in the cloud, feeding on large amounts of data from multiple sources, supported by elastic compute resources for analysis, to build sophisticated models of behavior. These models can then be used either locally or in the cloud to distinguish friend from foe.

Attacks that do not require lateral movement or privilege escalation, such as data theft with stolen credentials or exfiltration by insiders, are harder to detect. These are exactly the sorts of activities that machine learning is being used to catch today.

Except.

In most modern conflicts, tools or weapons are generally available to all sides, and cybersecurity is no exception. Any tool that we can use, they can use. Any defense that we can create, they can try to find some way to evade.

Machine learning has the promise of being a powerful cybersecurity tool, but as with any technology, it’s important for us to think about how the adversary may attempt to circumvent it. This gives us the opportunity to strengthen our technology against obvious circumvention, maximizing its initial effectiveness and the area under the initial part of the “Grobman Curve” (from The Second Economy: The Race for Trust, Treasure and Time in the Cybersecurity War, by Steve Grobman and Allison Cerra).

When it comes to machine learning, there are a few ways attackers might attempt to gain the upper hand: they can try to identify the model and find its weak points, legitimize bad behavior, or flood the model to make it unusable.

Learn the model and take advantage of it

For some of the simpler models in use today, figuring out what the machine is looking for and delivering it with malicious intent is a popular approach. We have seen examples of this at online retailers, where attackers create digital products such as eBooks, hype them with fake reviews, use cloud computing resources to generate large numbers of downloads and boost their visibility, and then trick consumers into buying what appear to be popular editions but are really elaborate fakes.
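
To make the pattern concrete, here is a deliberately simplified sketch in Python. The scoring formula and every number in it are invented for illustration; the point is only that inflating the signals a ranking model trusts can push a fake product above a genuine one.

    def popularity_score(downloads: int, reviews: int, avg_rating: float) -> float:
        # Toy ranking: downloads, review count and rating all push a product up the list.
        return downloads * 0.5 + reviews * 5 + avg_rating * 100

    # A genuine eBook with organic numbers versus a fake one boosted by bot
    # downloads and fake reviews.
    genuine = popularity_score(downloads=1_200, reviews=40, avg_rating=4.2)
    fake = popularity_score(downloads=50_000, reviews=900, avg_rating=5.0)

    print(genuine < fake)  # True: the fake edition outranks the real one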

In cybersecurity, spammers have been playing this game for some time. Spam filters were initially based on searches for words and phrases commonly used in spam emails, with users marking messages as junk to feed the model large volumes of training data. Early models were easy to trick by inserting punctuation within words or using recognizable misspellings. As the models became more sophisticated, these tricks became harder to pull off, and spammers eventually incorporated social engineering into their messages to appear legitimate.
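
A toy example shows how a keyword-based filter of that early style can be defeated by the punctuation trick. The word list, scoring and threshold below are invented for illustration, not taken from any real product.

    SPAM_WORDS = {"lottery", "winner", "free", "prize"}

    def spam_score(message: str) -> int:
        # Count how many known spam keywords appear as whole words.
        tokens = (token.strip(".,!?") for token in message.lower().split())
        return sum(1 for token in tokens if token in SPAM_WORDS)

    def is_spam(message: str, threshold: int = 2) -> bool:
        return spam_score(message) >= threshold

    plain   = "Congratulations! You are our lottery winner, claim your free prize"
    evasive = "Congratulations! You are our lot.tery win.ner, claim your f r e e pri.ze"

    print(is_spam(plain))     # True:  the keywords match directly
    print(is_spam(evasive))   # False: punctuation inside the words defeats the lookup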

Legitimize bad behavior

Another potential path is to corrupt the model, so that it considers the malicious activity to be normal. Machine learning algorithms for cybersecurity work over time, continually reviewing the traffic on the network to establish what is normal, what is suspicious, and what is malicious. How do you position yourself on the green side of that line, and appear to be legitimate? If you slowly feed data to something that you know is learning, you can move the line of what is considered normal. Sophisticated adversaries could use cloud-based systems to attack machine learning model development, gradually moving the model so that their desired behavior will be considered normal or benign.
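
Here is a rough sketch of that drift, assuming a deliberately simple baseline: a sliding window of daily data-transfer volumes, with anything more than three standard deviations from the mean flagged as anomalous. The numbers are invented, but the mechanism is the point.

    import statistics
    from collections import deque

    class Baseline:
        """Toy anomaly detector: flags values more than k standard deviations
        away from the mean of a sliding window of recent observations."""

        def __init__(self, history, window=30):
            self.history = deque(history, maxlen=window)

        def is_anomalous(self, value, k=3.0):
            mean = statistics.mean(self.history)
            stdev = statistics.stdev(self.history)
            return abs(value - mean) > k * stdev

        def learn(self, value):
            # Continuous learning: accepted observations join the baseline.
            self.history.append(value)

    # "Normal" is roughly 100 MB of outbound data per day.
    model = Baseline([100, 105, 95, 110, 98, 102] * 5)

    print(model.is_anomalous(800))    # True: an 800 MB transfer stands out today

    # The attacker nudges the daily volume up by about 1 percent per day, always
    # staying inside the band the model currently considers normal.
    volume = 110.0
    for _ in range(200):
        volume *= 1.01
        if not model.is_anomalous(volume):
            model.learn(volume)       # the drifted value quietly becomes "normal"

    print(round(volume))              # roughly 800 MB per day by now
    print(model.is_anomalous(800))    # False: yesterday's alarm is today's baseline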

Flood the model

Finally, attackers could flood the model with random or malicious data, to make the model unusable. Microsoft’s machine learning experiment, a Twitter chatbot named Tay, suffered this fate. Tay’s initial model was built on filtered public data. However, after being fed a large diet of racist, misogynist, hate-filled speech, its responses veered well into the inappropriate range within 24 hours.
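
A stripped-down illustration of the flooding problem: if a model’s notion of “normal” is simply whatever it sees most often, a large enough flood of junk redefines normal. This toy word-frequency profile is only an analogy, not a description of how Tay actually worked.

    from collections import Counter

    class VocabularyModel:
        """Toy 'model': tracks word frequencies in the conversations it sees and
        treats the most common words as its notion of normal vocabulary."""

        def __init__(self):
            self.counts = Counter()

        def learn(self, message: str):
            self.counts.update(message.lower().split())

        def typical_words(self, n: int = 5):
            return [word for word, _ in self.counts.most_common(n)]

    model = VocabularyModel()

    # A small amount of ordinary, curated conversation...
    for msg in ["hello how are you", "what a lovely day", "tell me about music"]:
        model.learn(msg)

    print(model.typical_words())   # ordinary vocabulary dominates

    # ...followed by a coordinated flood of junk (stand-ins for abusive content).
    for _ in range(1000):
        model.learn("junk junk junk garbage garbage")

    print(model.typical_words())   # ['junk', 'garbage', ...] - the flood now defines normal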

We will need to consider the security of our machine learning algorithms and protect them from abuse. After all, a model is essentially a collection of ranges of behavior: if the data being tested falls between these ranges, the model looks at other variables, quickly running through the math until a decision is reached. If adversaries can rapidly and repeatedly test against the model with specific values, they can potentially find a way around it.
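
For example, an attacker who can query a detector as a black box can binary-search for its tolerance. The hidden threshold below is invented, but the probing technique is generic.

    HIDDEN_THRESHOLD = 250.0   # MB per day; unknown to the attacker

    def detector_blocks(transfer_mb: float) -> bool:
        # Black-box oracle: the attacker only ever sees blocked / not blocked.
        return transfer_mb > HIDDEN_THRESHOLD

    def probe_limit(low: float = 0.0, high: float = 10_000.0, tolerance: float = 1.0) -> float:
        # Binary-search the largest transfer size the detector still allows.
        while high - low > tolerance:
            mid = (low + high) / 2
            if detector_blocks(mid):
                high = mid    # blocked: the real limit is below mid
            else:
                low = mid     # allowed: the real limit is at least mid
        return low

    limit = probe_limit()
    print(round(limit))       # about 250: exfiltrate just under this and stay unnoticed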

While we promote continuous learning, the models must also be resistant to tampering. Is it possible to build machine learning algorithms that are resilient to poisoning in some way? As the models become increasingly complex, do they become harder to manipulate? These examples raise some useful questions and areas of future research for machine learning, so that we can continue to rely on this emerging technique in cybersecurity.


Jon King is a security technologist and Intel Principal Engineer in the Intel Security Office of the CTO, where he is researching and working on future security challenges. Jon is a seasoned developer and architect with over 20 years of industry experience, and a 10-year Intel Security veteran. Jon was the architect of Intel Security's ePolicy Orchestrator product for many years, and he specializes in large distributed systems. He has a Master's degree in parallel and distributed computing from Oregon State University, and a Bachelor's in computer science and physics.
