Last summer at the Black Hat cybersecurity conference, the DARPA Cyber Grand Challenge pitted automated systems against one another, trying to find weaknesses in the others' code and exploit them.
"This is a great example of how easily machines can find and exploit new vulnerabilities, something we'll likely see increase and become more sophisticated over time," said David Gibson, vice president of strategy and market development at Varonis Systems.
His company hasn't seen any examples of hackers leveraging artificial intelligence technology or machine learning, but nobody adopts new technologies faster than the sin and hacking industries, he said.
"So it's safe to assume that hackers are already using AI for their evil purposes," he said.
The genie is out of the bottle.
"It has never been easier for white hats and black hats to obtain and learn the tools of the machine learning trade," said Don Maclean, chief cybersecurity technologist at DLT Solutions. "Software is readily available at little or no cost, and machine learning tutorials are just as easy to obtain."
Take, for example, image recognition.
It was once considered a key focus of artificial intelligence research. Today, tools such as optical character recognition are so widely available and commonly used that they're not even considered to be artificial intelligence anymore, said Shuman Ghosemajumder, CTO at Shape Security.
"People don't see them as having the same type of magic as they had before," he said. "Artificial intelligence is always what's coming in the future, as opposed to what we have right now."
Today, for example, computer vision is good enough to allow self-driving cars to navigate busy streets.
And image recognition is also good enough to solve the puzzles routinely presented to website users to prove that they are human, he added.
For example, last spring, Vinay Shet, the product manager for Google's Captcha team, told Google I/O conference attendees that in 2014, they had a distorted text Captcha that only 33 percent of humans could solve. By comparison, the state-of-the-art OCR systems at the time could already solve it with 99.8 percent accuracy.
The criminals are already using image recognition technology, in combination with "Captcha farms," to bypass this security measure, said Ghosemajumder. The popular Sentry MBA credential stuffing tool has it built right in, he added.
So far, he said, he hasn't seen any publicly available tool kits based on machine learning that are designed to bypass other security mechanisms.
But there are indirect indicators that criminals are starting to use this technology, he added.
For example, companies already know that an unnaturally large amount of traffic from one IP address has a high chance of being malicious, so criminals use botnets to bypass those filters, and the defenders look for more subtle indications that the traffic is automated rather than human, he said.
They can't just add more randomness, since human behavior is not actually random, he said. Spotting subtle patterns in large amounts of data is exactly what machine learning is good at -- and exactly what the criminals need to do in order to effectively mimic human behavior.
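The cat-and-mouse dynamic Ghosemajumder describes can be pictured with a toy timing check. Everything below is invented for illustration -- real defenses train models over many behavioral signals -- but it shows why naive bot "randomness" fails: constant-interval requests are statistically unlike human browsing.

```python
import statistics

def looks_automated(intervals: list[float]) -> bool:
    """Flag a request stream whose inter-arrival times are too regular.

    Human timing is irregular; naive bots often fire at near-constant
    intervals. This toy check flags streams whose timing variation is
    suspiciously low. The 0.1 threshold is an arbitrary illustration.
    """
    if len(intervals) < 5:
        return False  # not enough data to judge
    mean = statistics.mean(intervals)
    if mean <= 0:
        return False
    # coefficient of variation: near zero means metronome-like, bot-like
    return statistics.pstdev(intervals) / mean < 0.1

# A bot clicking every ~2 seconds vs. a human browsing at their own pace
bot_stream = [2.0, 2.01, 1.99, 2.0, 2.02, 1.98]
human_stream = [0.8, 4.2, 1.5, 9.7, 2.3, 0.4]
```

An attacker who simply jitters each delay by a few milliseconds still fails this check, which is why mimicking the subtler structure of human behavior becomes a machine learning problem in its own right.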
Smarter email scams
According to the McAfee Labs 2017 Threats Predictions report, cyber-criminals are already using machine learning to target victims for Business Email Compromise scams, which have been escalating since early 2015.
"What artificial intelligence does is it lets them automate the tailoring of content to the victim," said Steve Grobman, Intel Security CTO at Intel, which produced the report. "Another key area where bad actors are able to use AI is in classification problems. AI is very good at classifying things into categories."
For example, the hackers can automate the process of finding the most likely victims.
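At its simplest, the "classification" Grobman describes could be an automated scoring of harvested contacts. The sketch below is purely hypothetical -- every field name and weight is invented, not drawn from any real attack kit -- but it shows how trivially target selection can be automated once contact data is in hand.

```python
# Toy illustration of automated target selection for a BEC-style scam.
# All field names and weights are hypothetical.
def score_target(contact: dict) -> float:
    score = 0.0
    if contact.get("title", "").lower() in {"cfo", "controller", "ap clerk"}:
        score += 3.0  # role that handles payments
    if contact.get("public_email"):
        score += 1.0  # easy to reach directly
    if contact.get("exec_travel_posted"):
        score += 2.0  # classic BEC pretext: "the boss is traveling"
    return score

contacts = [
    {"title": "CFO", "public_email": True, "exec_travel_posted": True},
    {"title": "Engineer", "public_email": True, "exec_travel_posted": False},
]
# Rank the most promising victims first
ranked = sorted(contacts, key=score_target, reverse=True)
```

A real attacker would presumably replace the hand-tuned weights with a model trained on which lures actually converted, which is exactly the kind of feedback loop machine learning automates.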
The technology can also be used to help attackers stay hidden inside corporate networks, and to find vulnerable assets.
Identifying specific cases where AI or machine learning is used can be tricky, however.
"The criminals aren't too open about explaining exactly what their methodology is," he said. And he isn't aware of hard evidence, such as computers running machine learning models that were confiscated by law enforcement.
"But we've seen indicators that this sort of work is happening," he said. "There are clear indications that bad actors are starting to move in this direction."
Sneakier malware and fake domains
Security providers are increasingly using machine learning to tell good software from bad, good domains from bad.
Now, there are signs that the bad guys are using machine learning themselves to figure out what patterns the defending systems are looking for, said Evan Wright, principal data scientist at Anomali.
"They'll test a lot of good software and bad software through anti-virus, and see the patterns in what the [antivirus] engines spot," he said.
Similarly, security systems look for patterns in domain generation algorithms, so that they can better spot malicious domains.
"They try to model what the good guys are doing, and have their machine learning model generate exceptions to those rules," he said.
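One way to picture the evasion Wright describes: a classic domain generation algorithm emits high-entropy gibberish that detectors flag easily, while an evasive generator composes dictionary words so the domain's character statistics resemble legitimate registrations. A minimal sketch, with an entropy measure standing in for the defender's model (the wordlist and the comparison are illustrative assumptions):

```python
import math
from collections import Counter

def char_entropy(s: str) -> float:
    """Shannon entropy of a string's character distribution (bits/char).

    Simple DGA detectors use features like this: random gibberish has
    near-maximal entropy, while real words repeat common letters.
    """
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Classic DGA output: high-entropy gibberish a detector easily flags
gibberish = "xk7qzvw2pj9r"

# "Evasive" DGA: deterministically compose dictionary words from a seed,
# mimicking the character statistics of benign domain names
wordlist = ["cloud", "secure", "mail", "data"]
def wordlist_dga(seed: int) -> str:
    return wordlist[seed % 4] + wordlist[(seed // 4) % 4]
```

The evasive variant scores much closer to legitimate domains on this feature, which is the point of Wright's observation: the attacker's model only has to generate "exceptions" to whatever patterns the defender's model has learned.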
Again, there's little hard evidence that this is actually happening.
"We've seen intentional design in the domain generation algorithms to make it harder to detect it," he said. "But they could have done that in a few different ways. It could be experiential. They tried a few different ways, and this worked."
Or they could have been particularly intuitive, he said, or hired people who previously worked for the security firms.
One indicator that an attack is coming from a machine, and not a clever -- or corrupt -- human being, is the scale of the attack. Take, for example, a common scam in which fake dating accounts are created in order to lure victims to prostitution services.
The clever part isn't so much the automated conversation that the bot has with the victim, but the way that the profiles are created in the first place.
"It needs to create a profile dynamically, with a very attractive picture from Facebook, and an attractive occupation, like flight attendant or school teacher," said Omri Iluz, CEO and co-founder at PerimeterX.
Each profile is unique, yet appealing, he said.
"We know that it's not just automation because it's really hard," he said. "We ruled out manual processes just by sheer volume. And we also don't think they're rolling out millions of profiles and doing natural selection because it would be identified by the dating platform. These are very smart pieces of software."
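The baseline that Iluz's team ruled out -- brute combinatorial automation -- is worth sketching, if only to show why uniqueness at volume is cheap and why defenders must look for something smarter. All attribute values below are invented; what he describes goes well beyond recombining fixed pools.

```python
import itertools

# Invented attribute pools, for illustration only
names = ["Anna", "Mia", "Sofia"]
jobs = ["flight attendant", "school teacher", "nurse"]
cities = ["Denver", "Austin", "Portland"]

def generate_profiles():
    """Yield distinct profiles by combining attribute pools.

    Every combination is unique, so no two profiles collide -- but the
    shared pools leave statistical fingerprints a platform can detect,
    which is why the attacks Iluz describes generate profiles
    dynamically instead.
    """
    for name, job, city in itertools.product(names, jobs, cities):
        yield {"name": name, "occupation": job, "city": city}

profiles = list(generate_profiles())
# 3 x 3 x 3 = 27 distinct profiles from tiny attribute pools
```

Scale the pools to a few dozen entries each and the combinations run into the millions, yet the repetition across profiles is exactly the kind of pattern a dating platform's own models would catch -- hence the inference that the profile generators are "very smart pieces of software."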
Scalpers do something similar when they automatically buy tickets to resell at a profit.
"They need to pick the item that they know will get them a high value on the secondary market," he said. "And they can't do it manually because there's no time. And it can't be a numbers game because they can't simply buy all the inventories because then they'll be losing money. There's intelligence behind it."
The profits from these activities more than pay for the research and development, he said.
"When we look at the revenues these fraudsters generated, it's bigger than many real companies," he said. "And they don't need to kill anyone, or do something risky like deal drugs."
Getting ready for the Turing Test
In limited, specific applications, computers are already passing the Turing Test -- the classic thought experiment in which humans try to decide whether they're talking to another human, or to a machine.
The best defense against these kinds of attacks, said Intel's Grobman, is a focus on fundamentals.
"Most companies are still struggling with even moderate attack scenarios," he said. "Right now, the most important thing that companies can do is ensure they have a strong technical infrastructure and continue practicing simulations and red team attacks."