By Anup Ghosh, Contributor

Preparing for the looming battle of AI bots

Opinion
Mar 16, 2018 | 6 mins
Artificial Intelligence, Data and Information Security, Technology Industry

As the cybersecurity industry adopts artificial intelligence techniques in earnest, cyber criminals are quietly building their own adversarial artificial intelligence tools. This article explores the likely “first contact” enterprises will face with adversarial AI.

The case for artificial intelligence to defend the enterprise

The case for using artificial intelligence to defend networks grows as legacy security technologies wane in effectiveness against new attacks. As destructive attacks like WannaCry and NotPetya resurrect worm-spreading behaviors, the urgency for automated defense has never been higher. Simply stated, we cannot successfully fight an adversary that works at machine speed with defenders who identify and stop attacks at human time scales.

Against this backdrop of increasingly sophisticated threats, new artificial intelligence technology promises to overcome the technical limitations of legacy signature-based approaches. File hash matching, for example, works well at detecting known malware, but because most malware attacks do not reuse the same file, signature matching has limited utility in detecting current attacks. Machine learning techniques such as deep learning have demonstrated the ability to detect previously unseen malware by training against large repositories of known malware.
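To make that contrast concrete, here is a minimal sketch, assuming scikit-learn and entirely synthetic data rather than any vendor's actual pipeline, of training a classifier on a few invented static file features and then scoring a previously unseen sample; no hash lookup is involved, only learned feature patterns.

```python
# Illustrative sketch: a classifier over static file features.
# The feature set and data are synthetic placeholders, not a production pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical static features per file: entropy, size (KB), imported-API count, section count
X_benign = rng.normal(loc=[5.0, 800, 120, 5], scale=[0.8, 400, 40, 2], size=(500, 4))
X_malware = rng.normal(loc=[7.2, 300, 40, 8], scale=[0.5, 200, 20, 3], size=(500, 4))
X = np.vstack([X_benign, X_malware])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = malicious

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier().fit(X_train, y_train)
print("holdout accuracy:", clf.score(X_test, y_test))

# Score a previously unseen sample; no file hash is consulted, only learned feature patterns
unseen = np.array([[7.0, 250, 35, 9]])
print("probability malicious:", clf.predict_proba(unseen)[0, 1])
```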

While malicious file detection is a problem ripe for machine learning, we expect that in the future adversaries will depend less on malware and more on exploiting and leveraging legitimate programs already on endpoints to do their bidding. This will require a different model of detection: one based on identifying patterns of misuse of system resources rather than malicious program patterns. Spotting unusual patterns in large volumes of data is where machine learning excels; humans intuit well, but not at that scale or at machine speed.
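A minimal sketch of that behavioral model, again with invented per-process telemetry features and an off-the-shelf anomaly detector rather than any particular product's method, is shown below; the point is that a "clean" binary behaving abnormally can still be flagged.

```python
# Illustrative sketch: flagging unusual patterns of system-resource use rather than
# matching malicious files. Features and values are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Hypothetical per-process features in a time window: child processes spawned,
# outbound connections, registry writes, bytes written to disk (MB)
baseline = rng.normal(loc=[2, 3, 10, 5], scale=[1, 2, 5, 3], size=(2000, 4))

detector = IsolationForest(contamination=0.01, random_state=1).fit(baseline)

# A legitimate binary (e.g., a scripting host) suddenly spawning many children and
# making many outbound connections scores as anomalous even though the file is "clean".
suspicious = np.array([[25, 60, 40, 2]])
print("anomaly label (-1 = anomalous):", detector.predict(suspicious)[0])
print("anomaly score:", detector.score_samples(suspicious)[0])
```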

Not surprisingly, adversaries are developing their own artificial intelligence techniques.

Adversarial artificial intelligence typically appears in two scenarios: (1) gaming defensive AI techniques to find and exploit their weaknesses or blind spots, or (2) using AI for offensive cyber operations. This article focuses on the latter. That said, developers of defensive AI solutions need to be especially cognizant of the failure modes of their approaches, such as catastrophic failures due to homogeneous training sets, or the simple statistics of scale: many pedestrian attacks will get past machine learning approaches because most models are statistical estimators of a function, not hard-and-fast rules.

Cyber adversaries already automate several stages of attack, including target discovery and malware generation and deployment. These developments have made defending against attacks more challenging for legacy security systems. Artificial intelligence, however, now gives adversaries new tools to automate even more of their TTPs (tactics, techniques, and procedures).

Adversaries are now employing artificial intelligence in:

  • Phishing campaigns
  • Vulnerability discovery
  • Exploit generation
  • Workflow automation

Academic studies have shown that phishing emails and tweets generated by machine learning algorithms achieve higher click-through rates than human-generated phishing campaigns. This should not be surprising given the advances chat bots have made in handling human queries. From an adversarial perspective, why take the time and effort to handcraft a phishing campaign when a machine learning algorithm can do it better, cheaper, and at larger volume? Traditional phishing-awareness training will likely fail at even higher rates against machine-generated campaigns, simply because people are taught to look for human errors.

Detecting vulnerabilities in programs, the grist for the 0-day vulnerability mill, is ripe for advances in machine learning. Automatic fuzzing tools such as AFL have enabled smarter, feedback-based fuzzing that uses results from prior fuzzing runs in a brute-force manner, and automated fuzzing has already led to the discovery of a number of critical 0-days. Applying AI algorithms to crash dump logs can optimize the generation of fuzz test cases that induce exploit-rich crashes. Software vendors could use this to find vulnerabilities in their products before release, but adversaries are motivated to find and exploit 0-days. This is not hypothetical; it is a key strategy advanced nation states use to find and exploit 0-days in target networks.
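As a rough, defender-oriented illustration of that last idea, the sketch below clusters invented crash-record features so that rare crash signatures can be surfaced and their triggering inputs prioritized as seeds for the next fuzzing round; it is a toy, not AFL's actual feedback mechanism.

```python
# Illustrative sketch: grouping fuzzer crash records so rare crash signatures can be
# prioritized for further mutation. Crash features here are invented placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

# Hypothetical features per crash: faulting-address bucket, stack depth, signal code
crashes = np.vstack([
    rng.normal([10, 12, 11], [1, 2, 0.1], size=(300, 3)),   # a common, shallow crash
    rng.normal([90, 40, 6], [2, 3, 0.1], size=(12, 3)),     # a rare, deep crash cluster
])

X = StandardScaler().fit_transform(crashes)
labels = KMeans(n_clusters=2, n_init=10, random_state=2).fit_predict(X)

# Smaller clusters are rarer crash signatures; their triggering inputs become
# priority seeds for the next fuzzing round.
sizes = np.bincount(labels)
rare_cluster = int(np.argmin(sizes))
print("cluster sizes:", sizes, "-> prioritize inputs from cluster", rare_cluster)
```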

Zero-days are useful only if they can be exploited. One area that is promising for automation is developing exploits for heap-based overflows and underflows. Traditionally, developing an exploit for a memory allocation vulnerability requires tedious manual work to position attacker-controlled data in memory relative to the vulnerable allocation. A recent talk by Sean Heelan at Black Hat EU demonstrates advances in algorithms that automate this in black-box fashion, which, in production, could turn vulnerability discovery directly into automatic exploit generation.

Finding zero-days is not the only way to compromise systems, of course. When it comes to embedded systems and IoT-type devices, failure modes and the effects of adversarial actions are not well understood, and such systems are almost never designed with malice in mind. The single-fault hypothesis underpins the design and simulation of most embedded and safety-critical systems. Simultaneous failures in different components, like those caused by attacks, are not usually modeled, and there is little contingency planning for these adversarial scenarios. Analysis of fault trees with automated reasoning can identify the optimal points of a system to attack simultaneously, for instance to create a failure mode while masking it from operators and sensors. Expect capable and motivated adversaries to use these techniques against self-driving cars, industrial control systems, and plants.
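To illustrate the underlying analysis in defensive terms, the following sketch expands a small, invented fault tree into its minimal cut sets, the smallest combinations of simultaneous component failures that defeat redundancy and cause the top-level event; these are exactly the combinations defenders need to enumerate and monitor.

```python
# Illustrative sketch: expanding a toy fault tree into its minimal cut sets, i.e. the
# smallest combinations of simultaneous component failures that cause the top event.
# The tree structure below is invented for illustration.
from itertools import product

# Gates: ("OR", children) fails if any child fails; ("AND", children) fails only if all do.
# Leaves are component names.
fault_tree = ("OR", [
    ("AND", ["primary_sensor", "backup_sensor"]),   # both sensors must fail together
    ("AND", ["controller", "watchdog"]),            # controller failure masked from watchdog
])

def cut_sets(node):
    if isinstance(node, str):
        return [frozenset([node])]
    gate, children = node
    child_sets = [cut_sets(c) for c in children]
    if gate == "OR":
        return [s for sets in child_sets for s in sets]
    # AND: every child must fail, so take unions across one cut set per child
    return [frozenset().union(*combo) for combo in product(*child_sets)]

def minimal(sets):
    return [s for s in sets if not any(t < s for t in sets)]

for cs in minimal(cut_sets(fault_tree)):
    print(sorted(cs))
```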

Finally, the stages of attack associated with adversary TTPs, often called the cyber kill chain, form a repeatable workflow whose variance comes from the particulars of individual target networks. This workflow is ripe for discovery and automation, much as a robot AI completes tasks while overcoming obstacles. The payoff for automating adversary TTPs, from exploit to discovery, privilege escalation, data capture, and exfiltration, is scalable hacking in machine time.

While adversaries are still in the early stages of employing artificial intelligence, the profit motive and the scalability of AI make it likely that adversaries will leap ahead of defenders in developing and employing AI for their purposes.

If anything, this means that defenders will have to move even faster to automate defenses that operate at machine speed in order to counter offensive AI algorithms. In other words, expect future battles to be AI on AI, where the team that develops the best AI algorithms with the best training sets wins.

Defending against AI-based attacks

While adversarial AI may sound discouraging for defenders, the challenge is not hopeless. Machine learning algorithms are already being incorporated into products to detect unknown malware. The same AI algorithms adversaries use to find exploitable software vulnerabilities can be employed by software vendors to get in front of adversaries, finding and fixing vulnerabilities in their products before release. Likewise, adversary TTPs are fairly regular: by observing, collecting, and analyzing data, one can use AI algorithms to detect attack patterns across large data sets and fleets of devices.
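As one hedged example of that last point, the sketch below scores per-host event sequences against a baseline of normal activity using simple bigram statistics; sequences resembling attacker TTP chains stand out because their transitions are rare in the baseline. The event names and data are invented for illustration.

```python
# Illustrative sketch: scoring per-host event sequences against a baseline model so that
# sequences resembling attacker TTP chains stand out. Event names are invented examples.
from collections import Counter
from math import log

normal_sequences = [
    ["login", "open_doc", "browse", "logout"],
    ["login", "browse", "open_doc", "print", "logout"],
] * 50

# Baseline: bigram counts over normal activity
bigrams = Counter()
for seq in normal_sequences:
    bigrams.update(zip(seq, seq[1:]))
total = sum(bigrams.values())

def sequence_score(seq, alpha=1e-3):
    # Average log-probability of each transition; rare transitions drag the score down
    return sum(log((bigrams[(a, b)] + alpha) / (total + alpha))
               for a, b in zip(seq, seq[1:])) / (len(seq) - 1)

benign = ["login", "browse", "open_doc", "print", "logout"]
suspect = ["login", "dump_credentials", "lateral_move", "archive_data", "exfiltrate"]
print("benign score:", round(sequence_score(benign), 2))
print("suspect score:", round(sequence_score(suspect), 2))
```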

The most important takeaway for defenders is that if you are not already down the path of developing or incorporating AI in your defenses, then you are falling behind adversaries. The strategic advantages in scaling and cost mean adversaries will be adopting AI for their purposes. We need to be thinking similarly on the defensive side.

Anup Ghosh
Contributor

Anup Ghosh was Founder and CEO of Invincea, Inc., a machine-learning cybersecurity company, until Invincea was acquired by Sophos in March 2017. Prior to founding Invincea, he was a Program Manager at the Defense Advanced Research Projects Agency (DARPA), where he created and managed an extensive portfolio of cybersecurity programs.

He has previously held roles as Chief Scientist in the Center for Secure Information Systems at George Mason University and as Vice President of Research at Cigital, Inc. Anup has published more than 40 peer-reviewed articles in cyber security journals.

He is a frequent on-air contributor to CNN, CNBC, NPR, ABC World News, and Bloomberg TV. A number of major media outlets carry his commentaries on cyber security issues including the Wall Street Journal, New York Times, Forbes, Associated Press, FoxNews, CSM Passcode, Federal Times, Market Watch and USA Today.

He has served as a member of the Air Force Scientific Advisory Board and the Naval Studies Board, informing the future of American cyberdefenses.

The opinions expressed in this blog are those of Anup Ghosh and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.