
The future of red teaming: Computer robots face off in adversarial rounds

News
Aug 03, 2016 | 4 mins
Application Security | IT Jobs | Software Development

If you were at BSides and you caught the presentation from Endgame's principal security data scientist, Hyrum Anderson, you were likely wowed by the innovative dueling defender-and-adversary demonstration. If you missed it, Anderson gave me a run-down of his presentation.

Listening to Anderson detail the machine learning techniques both fascinated and frightened me. Suddenly the predictions made in "The Age of Spiritual Machines" by Ray Kurzweil feel far too real.

Red teaming usually entails a team of people coming in to manually simulate an attack. With machine learning techniques, however, you can dramatically strengthen a system's defenses in a fraction of the time. The technique, said Anderson, will still rely on combined machine and human intelligence.

"Think of it as two computer robots -- one defender and one adversary -- facing off in a series of rounds. During each round, the adversary tries to bypass the defender, and the defender subsequently learns how to identify the impostors. During this process, the adversary's ability to produce samples to bypass defenses improves while the defender simultaneously becomes better able to resist attacks," Anderson said.


The future of cybersecurity, said Anderson, lies in machine learning red teams, a technique that has been used in other industries but hasn't yet been fully tapped for cybersecurity. "What we've seen is people doing vulnerability testing and analysis on a machine learning level. They release a product into the wild and attack it. They anticipate acting like an adversary, then patch the vulnerabilities," Anderson continued.

The thesis of his talk was that red teaming can be supplemented with a machine learning model that scales to millions of samples in a fraction of the time. "A red team machine learning offensive product won't have the experience and intelligence of a human, but it will have scale. During this process the red team learns how to sneak past the blue team and the blue team learns how to catch the adversary. As they get better, the red team learns how to generate sneakier malware," Anderson said.

The common example of red teaming is detecting malware by its network traffic, specifically its DNS requests. "When malware is dropped, it wants to call home. If we know what the server is, we can blacklist it and not let anything go there," said Anderson. The problem is that the malware generates its domain names pseudo-randomly, and it generates tens of thousands of them.

"A malicious actor only needs to register one and he's established a back door. He only has to do one for the malware to win. This can be done in a predetermined order by malware," said Anderson.

What security practitioners want is to detect these domains. But as they watch domain requests come across the wire, can they determine whether a domain is legitimate just by looking at its name?

"Domain generation algorithms detect them by name only," said Anderson. "The way we approach with our model is using red team/blue team. We will find all of the malicious domain names and then train a machine learning model to tell the difference between the top used and the giant bag of those created by malware," Anderson said.

Yes, the domains are always changing, but the hope of machine learning is that the model doesn't memorize names but generalizes patterns, so that when it sees a domain it has never seen before, it can still flag the anomaly.
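As a sketch of what such a detector might look like (a stand-in, since the talk did not spell out Endgame's model at this level), a character n-gram classifier can be trained on popular legitimate domains versus a bag of generated ones. The tiny inline lists below are placeholders; a real setup would use something like a top-sites list and tens of thousands of DGA outputs.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

legit = ["google.com", "facebook.com", "wikipedia.org", "amazon.com"]
dga   = ["xk2pqz9vb1.com", "q8r7t2m0ws.net", "zzqjhx4lp2.com", "bv9k1c7xqd.org"]

model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(2, 4)),  # character n-grams
    LogisticRegression(),
)
model.fit(legit + dga, [0] * len(legit) + [1] * len(dga))

# Because the model learns character patterns rather than memorizing names,
# it can score domains it has never seen before.
print(model.predict(["kj3h2g9xq1.com", "twitter.com"]))  # ideally flags the first, passes the second
```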

While it all sounds fascinating, Anderson reminded me that there is no silver bullet in security, especially now that data science has come into such fashion across the industry.

"In some sense it is being trumpeted as a silver bullet. All these products have vulnerabilities and deep learning also has weaknesses. There is the possibility of learning and training a model but that also makes it very well suited for an adversary to exploit," Anderson said.

Whether that adversary will be a man or a machine might change as we grow more deeply intertwined with machines in the future. 

Kacy Zurkus
Writer

Kacy Zurkus is a freelance writer for CSO and has contributed to several other publications, including The Parallax, Meetmindful.com and K12 Tech Decisions. She covers a variety of security and risk topics as well as technology in education, privacy and dating. She has also self-published a memoir, Finding My Way Home: A Memoir about Life, Love, and Family, under the pseudonym "C.K. O'Neil."

Zurkus has nearly 20 years' experience as a high school English teacher and holds an MFA in Creative Writing from Lesley University (2011). She earned a Master's in Education from the University of Massachusetts (1999) and a BA in English from Regis College (1996). Recently, the University of Southern California invited Zurkus to give a guest lecture on social engineering.

The opinions expressed in this blog are those of Kacy Zurkus and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.
