As the cybersecurity industry adopts artificial intelligence techniques in earnest, cyber criminals are quietly building their own adversarial AI tools. This article explores the likely "first contact" enterprises will face with adversarial AI.

The case for artificial intelligence to defend the enterprise

The case for using artificial intelligence to defend networks grows as legacy security technologies wane in effectiveness against new attacks. As destructive attacks like WannaCry and NotPetya resurrect worm-spreading behaviors, the urgency for automated defense has never been higher. Simply stated, we cannot successfully fight an adversary that works at the speed of machines with defenders who identify and stop attacks at human speed and scale.

Against this backdrop of increasingly sophisticated threats, new artificial intelligence technology promises to overcome the technical limitations of legacy signature-based technologies. File hash matching, for example, works well at detecting known malware, but because most malware attacks do not reuse the same file, signature matching has limited utility against current attacks. Machine learning techniques such as deep learning have demonstrated the ability to detect previously unseen malware by training against large repositories of known malware (a sketch of this idea follows below).

While malicious file detection is a problem ripe for machine learning, we expect that in the future adversaries will depend less on malware and more on exploiting and leveraging legitimate programs already on endpoints to do their bidding. This will require a different model of detection, one based on identifying patterns of misuse of system resources rather than malicious program patterns. Spotting unusual patterns in large volumes of data is where machine learning excels over humans, who intuit well but not at large scale or at machine speed.

Not surprisingly, adversaries are developing their own artificial intelligence techniques. Adversarial AI typically appears in two scenarios: (1) gaming defensive AI techniques to find and exploit their weaknesses or blind spots, or (2) using AI for offensive cyber operations. This article focuses on the latter. That said, developers of defensive AI solutions need to be especially cognizant of the failure modes of their approaches, such as catastrophic failures due to homogeneous training sets, or simply the law of large numbers: at scale, many pedestrian attacks will get past machine learning approaches, because most models are statistical estimators of a function, not hard-and-fast rules.

Cyber adversaries already automate several stages of attack, including target discovery and malware generation and deployment. These developments have made defending against attacks more challenging for legacy security systems. Artificial intelligence now gives adversaries tools to automate even more of their TTPs (tactics, techniques, and procedures). Adversaries are now employing artificial intelligence in:

- Phishing campaigns
- Vulnerability discovery
- Exploit generation
- Workflow automation

Academic studies have shown that machine learning algorithms can achieve higher click-through rates on phishing emails and tweets than human-generated campaigns. This should not be surprising given the advances chat bots have made in handling human queries.
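To make the learning-based malware detection idea above concrete, here is a minimal sketch of training a classifier on static file features so it can generalize to samples it has never seen. It uses scikit-learn for brevity; the toy features and the labeled corpus are placeholder assumptions, not a description of any shipping product.

```python
# Minimal sketch of learning-based malware detection: train a classifier on
# static file features so it can flag samples it has never seen before.
# The feature extraction here is a toy placeholder; production systems use
# far richer features (byte n-grams, PE header fields, imports, entropy maps).
import math
from collections import Counter

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def extract_features(file_bytes: bytes) -> list[float]:
    """Toy static features: file size, Shannon entropy, printable-byte ratio."""
    counts = Counter(file_bytes)
    total = len(file_bytes) or 1
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    printable = sum(1 for b in file_bytes if 32 <= b < 127) / total
    return [float(total), entropy, printable]

# Assume `samples` is a list of (file_bytes, label) pairs drawn from a labeled
# malware/benign repository; the corpus itself is not shown here.
def train_detector(samples):
    X = [extract_features(b) for b, _ in samples]
    y = [label for _, label in samples]
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
    model = RandomForestClassifier(n_estimators=200)
    model.fit(X_train, y_train)
    print("holdout accuracy:", model.score(X_test, y_test))
    return model
```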
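On the offensive side, the scalability of machine-generated phishing comes from treating a campaign as data: harvest profile fields, fill a lure template, and rank the results with a learned click predictor. The sketch below is purely illustrative; the profile fields and the `click_model` object are hypothetical stand-ins, not a real tool or dataset.

```python
# Illustrative sketch of why machine-generated lures scale: a campaign is a
# template filled from harvested profile data, ranked by predicted
# click-through rate. No real scraping, sending, or model is shown.
from string import Template

LURE = Template(
    "Hi $first_name, great talk at $event! I wrote up my notes on "
    "$topic here and would love your feedback: $link"
)

def generate_lures(profiles, click_model, link):
    """Render one personalized lure per target and sort by predicted CTR."""
    lures = []
    for p in profiles:
        text = LURE.substitute(
            first_name=p["first_name"], event=p["event"],
            topic=p["topic"], link=link,
        )
        # click_model.predict_ctr() stands in for a learned click predictor.
        lures.append((click_model.predict_ctr(p, text), p["email"], text))
    return sorted(lures, reverse=True)
```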
From an adversarial perspective, why take the time and effort to handcraft a phishing campaign when a machine learning algorithm can do it better, cheaper, and in larger volumes? Traditional phishing-awareness training for humans will likely fail at even higher rates against machine-generated campaigns, simply because such training teaches people to look for human errors.

Detecting vulnerabilities in programs, the grist for the 0-day vulnerability mill, is ripe for advances in machine learning. Automatic fuzzing tools such as AFL have enabled smarter feedback-based fuzzing, using the results of prior runs to guide otherwise brute-force exploration, and automated fuzzing has already led to the discovery of a number of critical 0-days. Applying AI algorithms to crash dump logs can optimize the generation of fuzz test cases that induce exploit-rich crashes (a sketch of this triage idea follows below). Software vendors could use this to find vulnerabilities in their software before release; adversaries are motivated to find and exploit 0-days. This is not hypothetical: it is a key strategy for advanced nation states in finding and exploiting 0-days in target networks.

Zero-days are useful only if they can be exploited. One area that is promising for automation is the development of exploits for heap-based overflows and underflows. Traditionally, developing an exploit for a memory allocation vulnerability requires tedious manual work shaping memory layout so that exploit code lands where it is needed. A recent talk by Sean Heelan at Black Hat EU demonstrated advances in algorithms that automate this in black-box fashion; in production, this can lead to automatic exploit generation straight from vulnerability discovery.

Finding zero-days is not the only way to compromise systems, of course. When it comes to embedded systems and IoT devices, failure modes and effects from adversarial actions are not well understood and are almost never designed for malice. The single-fault hypothesis underlies the design and simulation of most embedded and safety-critical systems: simultaneous failures in different components, like those caused by attacks, are not usually modeled, and there is little contingency planning for these adversarial scenarios. Automated reasoning over fault trees can identify the optimal points of a system to attack simultaneously, for instance to create a failure mode while masking it from operators and sensors (sketched below). Expect capable and motivated adversaries to use these techniques against self-driving cars, industrial control systems, and plants.

Finally, the stages of attack associated with adversary TTPs, often called the cyber kill chain, form a repeatable workflow whose variance comes from the particulars of individual target networks. This workflow is ripe for discovery and automation, much as a robot's AI completes tasks while overcoming obstacles. The payoff for automating adversary TTPs, from exploitation to discovery, privilege escalation, data capture, and exfiltration, is scalable hacking in machine time. While adversaries are still in the early stages of employing artificial intelligence, the profit motive and the scalability of AI make it likely that adversaries will leap ahead of defenders in developing and employing AI for their purposes. If anything, this means defenders will have to move even faster to automate defenses that operate at machine speed in order to counter offensive AI algorithms.
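Returning to the fuzzing discussion above: here is a minimal sketch of the crash-triage idea, ranking fuzzer crashes by heuristic exploitability signals so that the most promising inputs seed the next round. The signals, weights, and log format are illustrative assumptions, not a real AFL interface or a trained model.

```python
# Sketch of crash triage: rank fuzzer crash logs by features that tend to
# correlate with exploitability, so the most promising test cases seed the
# next fuzzing round. Signals and weights are illustrative heuristics.
import re

EXPLOITABILITY_SIGNALS = {
    r"SIGSEGV.*write":        5.0,  # write primitives are more valuable
    r"heap-buffer-overflow":  4.0,
    r"use-after-free":        4.0,
    r"stack-buffer-overflow": 3.0,
    r"SIGSEGV.*read":         1.0,  # plain read crashes are often dead ends
}

def score_crash(log_text: str) -> float:
    """Sum the weights of every exploitability signal found in a crash log."""
    return sum(w for pat, w in EXPLOITABILITY_SIGNALS.items()
               if re.search(pat, log_text))

def pick_next_seeds(crash_logs, top_k=10):
    """Return the inputs behind the top-scoring crashes as new fuzz seeds.

    Each entry in crash_logs is assumed to be {"log": str, "input": bytes}.
    """
    ranked = sorted(crash_logs, key=lambda c: score_crash(c["log"]), reverse=True)
    return [c["input"] for c in ranked[:top_k]]
```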
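And here is a minimal sketch of the fault-tree reasoning described above: enumerating minimal cut sets, the smallest combinations of component failures that trigger the top event, which an attacker could read as a shortlist of what to compromise simultaneously. The tree itself is a made-up example.

```python
# Sketch of automated reasoning over a fault tree: enumerate minimal cut
# sets, i.e., the smallest combinations of basic events that trigger the
# top event. Small cut sets point to components worth attacking together.
from itertools import product

def cut_sets(node):
    """Return cut sets (frozensets of basic events) for a fault-tree node."""
    if isinstance(node, str):                  # basic event (leaf)
        return [frozenset([node])]
    gate, children = node
    child_sets = [cut_sets(c) for c in children]
    if gate == "OR":                           # any child's cut set suffices
        return [cs for sets in child_sets for cs in sets]
    if gate == "AND":                          # need one cut set per child
        return [frozenset().union(*combo) for combo in product(*child_sets)]
    raise ValueError(gate)

def minimal(sets):
    """Drop any cut set that strictly contains another."""
    return [s for s in sets if not any(t < s for t in sets)]

# Hypothetical plant: top event = undetected pressure excursion.
tree = ("AND", [
    ("OR", ["valve_controller", "pump_plc"]),      # cause the excursion
    ("OR", ["pressure_sensor", "operator_hmi"]),   # mask it from operators
])

for cs in minimal(cut_sets(tree)):
    print(sorted(cs))   # each line: one pair of components to hit at once
```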
Expect future battles, then, to be AI versus AI, where the team that develops the best algorithms with the best training sets wins.

Defending against AI-based attacks

While adversarial AI may sound discouraging for defenders, the challenge is not hopeless. Machine learning algorithms are already being incorporated into products to detect unknown malware. The same AI algorithms adversaries use to find exploitable software vulnerabilities can be employed by software vendors to get ahead of adversaries, finding and fixing vulnerabilities in their products before release. Likewise, adversary TTPs are fairly regular: by observing, collecting, and analyzing data, defenders can use AI algorithms to detect attack patterns across large data sets and device fleets (a minimal sketch follows below).

The most important takeaway for defenders is that if you are not already down the path of developing or incorporating AI in your defenses, you are falling behind your adversaries. The strategic advantages in scale and cost mean adversaries will adopt AI for their purposes. We need to be thinking similarly on the defensive side.
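As a closing illustration of that defensive direction, here is a minimal sketch of fleet-wide anomaly detection using scikit-learn's IsolationForest. The telemetry features and baseline rows are placeholder assumptions; real pipelines draw on far richer host and network telemetry.

```python
# Sketch of fleet-wide anomaly detection: fit a detector on per-host telemetry
# so unusual patterns of resource use stand out for analyst review.
from sklearn.ensemble import IsolationForest

# Assume each row summarizes one host-hour:
# [process_count, new_binaries_run, outbound_bytes_mb, failed_logins]
baseline = [
    [40, 0, 12.0, 1],
    [38, 1, 10.5, 0],
    [42, 0, 14.2, 2],
    [39, 0, 11.1, 1],
    # ... in practice, thousands of rows collected during normal operations
]

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# Score fresh telemetry: -1 flags a host-hour as anomalous.
fresh = [[41, 0, 11.9, 1], [55, 7, 480.0, 30]]
for row, label in zip(fresh, detector.predict(fresh)):
    print(row, "ANOMALOUS" if label == -1 else "normal")
```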