The future of red teaming: Computer robots face off in adversarial rounds

Computer robots face off in a series of adversarial rounds, revealing the future of red teaming

If you were at BSides and you caught the presentation from Endgame's principal security data scientist, Hyrum Anderson, you were likely wowed by the innovative dueling defender and adversary demonstration. If you missed it, Anderson gave me a run-down of his presentation.

Listening to Anderson detail the machine learning techniques both fascinated and frightened me. Suddenly the predictions made in "The Age of Spiritual Machines" by Ray Kurzweil are feeling far too real.

Red teaming usually entails a team of people coming in to manually simulate an attack. With machine learning techniques, however, you can dramatically strengthen a system's defenses in a fraction of the time. The technique, said Anderson, will still rely on combined machine and human intelligence, though.

“Think of it as two computer robots — one defender and one adversary — facing off in a series of rounds. During each round, the adversary tries to bypass the defender, and the defender subsequently learns how to identify the impostors. During this process, the adversary’s ability to produce samples to bypass defenses improves while the defender simultaneously becomes better able to resist attacks,” Anderson said.
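The round-based game Anderson describes can be sketched in miniature. The code below is a toy, not Endgame's system: the "defender" is a simple linear threshold trained on class averages, the "adversary" greedily shrinks feature values to slip under it, and all samples and feature values are hypothetical stand-ins.

```python
# Toy adversarial rounds: hypothetical feature vectors, not real malware data.
benign = [[0.1, 0.2, 0.1], [0.2, 0.1, 0.3]]
malicious = [[0.9, 0.8, 0.7], [0.8, 0.9, 0.9]]

def train(benign, malicious):
    """Defender: weight each feature by class separation and put the
    decision threshold at the midpoint between the two class means."""
    def mean(samples):
        return [sum(col) / len(samples) for col in zip(*samples)]
    mb, mm = mean(benign), mean(malicious)
    weights = [m - b for b, m in zip(mb, mm)]
    threshold = sum(w * (b + m) / 2 for w, b, m in zip(weights, mb, mm))
    return weights, threshold

def is_detected(sample, model):
    weights, threshold = model
    return sum(w * x for w, x in zip(weights, sample)) > threshold

def evade(sample, model, step=0.2, tries=50):
    """Adversary: repeatedly shrink the feature contributing most to the
    detection score until the sample slips past the defender."""
    weights, _ = model
    current = list(sample)
    for _ in range(tries):
        if not is_detected(current, model):
            break
        i = max(range(len(current)), key=lambda j: weights[j] * current[j])
        current[i] = max(current[i] - step, 0.0)
    return current

model = train(benign, malicious)                # round 1: defender trains
evasive = [evade(s, model) for s in malicious]  # adversary learns to bypass it
model2 = train(benign, malicious + evasive)     # round 2: defender retrains
```

After one retraining round, the samples that evaded the first model are caught by the second while the benign samples still pass — the back-and-forth Anderson describes, in which each side's improvement drives the other's.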


The future of cybersecurity, said Anderson, lies in machine learning red teams, a technique that has been used across other industries, but hasn't been fully tapped yet for cybersecurity. “What we’ve seen is people doing vulnerability testing and analysis on a machine learning level. They release a product into the wild and attack it. They anticipate acting like an adversary, then patch the vulnerabilities,” Anderson continued.

The thesis of his talk was that red teaming can be supplemented with a machine learning model that scales to millions of samples in a fraction of the time. “A red team machine learning offensive product won’t have the experience and intelligence of a human but will have scale. During this process the red team learns how to sneak past the blue team and the blue team learns how to catch the adversary. As they get better, the red team learns how to generate sneakier malware,” Anderson said.

The common example of red teaming is trying to detect malware by its network traffic, specifically DNS requests. “When malware is dropped, it wants to call home. If we know what the server is, we can blacklist that and not let anything go there,” said Anderson. The problem is that such malware generates domain names algorithmically, producing tens of thousands of them.

“A malicious actor only needs to register one and he’s established a back door. He only has to do one for the malware to win. This can be done in a predetermined order by malware,” said Anderson.
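A domain generation algorithm of this kind can be sketched in a few lines. This is a toy illustration, not any real malware family's algorithm: it repeatedly hashes a shared seed (say, a date string) so that the malware and its operator independently derive the same candidate list in the same predetermined order, of which the operator need register only one.

```python
import hashlib

def dga(seed, count=5, tld=".com"):
    """Toy DGA: both sides derive the same domain list from a shared seed."""
    domains = []
    state = seed.encode()
    for _ in range(count):
        # Hash chain: each round's digest seeds the next domain name.
        state = hashlib.sha256(state).digest()
        name = "".join(chr(ord("a") + b % 26) for b in state[:12])
        domains.append(name + tld)
    return domains

print(dga("2016-05-18"))  # same seed always yields the same domain list
```

Because the output is deterministic given the seed, a defender who reverse-engineers the algorithm can pre-register or sinkhole the domains — which is exactly why real DGAs churn through so many candidates.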

What security practitioners want is to detect these domains. But as they watch domain requests come across the wire, can they determine whether a request is legitimate just by looking at the name?

“Domain generation algorithms detect them by name only,” said Anderson. “The way we approach it with our model is red team/blue team. We will find all of the malicious domain names and then train a machine learning model to tell the difference between the top used and the giant bag of those created by malware.”

Yes, the domains are always changing, but the hope of machine learning is that it doesn't memorize patterns but generalizes them, so that when it sees a domain it has never seen before, it still flags the anomaly.
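One way to picture a "top used vs. machine-generated" classifier is a character-level model scored on bigram likelihood: human-chosen names reuse common letter pairs, while DGA output tends not to. The sketch below is a deliberately simplified stand-in for the model Anderson describes — the tiny `popular` list and the bigram scoring are illustrative assumptions, not Endgame's approach.

```python
import math
from collections import Counter

# Hypothetical stand-in for "the top used" domains; a real system
# would train on a large list of popular domain names.
popular = ["google", "facebook", "youtube", "amazon", "wikipedia",
           "twitter", "instagram", "linkedin", "microsoft", "apple"]

def train_bigram_model(names):
    """Count character bigrams and bigram-start characters."""
    pairs, firsts = Counter(), Counter()
    for name in names:
        for a, b in zip(name, name[1:]):
            pairs[a + b] += 1
            firsts[a] += 1
    return pairs, firsts

def score(name, model, alphabet=26):
    """Average Laplace-smoothed log-likelihood of the name's bigrams.
    Higher means more 'language-like'; random DGA strings score low,
    including ones the model has never seen -- it generalizes rather
    than memorizes."""
    pairs, firsts = model
    bigrams = list(zip(name, name[1:]))
    logp = sum(math.log((pairs[a + b] + 1) / (firsts[a] + alphabet))
               for a, b in bigrams)
    return logp / len(bigrams)

model = train_bigram_model(popular)
```

Pronounceable names score well because their letter pairs occur in the training data, while a string like `xqzvwkyqvz` falls back to the smoothed floor — so a threshold on this score separates the two piles without the model ever having seen the specific domain.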

While it all sounds fascinating, Anderson reminded me that there is no silver bullet in security, especially now that data science has come into fashion across the industry.

“In some sense it is being trumpeted as a silver bullet. All these products have vulnerabilities and deep learning also has weaknesses. There is the possibility of learning and training a model but that also makes it very well suited for an adversary to exploit,” Anderson said.

Whether that adversary will be a man or a machine might change as we grow more deeply intertwined with machines in the future. 

This article is published as part of the IDG Contributor Network.
