AI-powered deception technology speeds deployment, improves results

Aflac says artificial intelligence made its honeypot rollout faster and less complicated, and that it produces high-quality alerts. A healthcare facility deploys deception technology for protection during the COVID crisis.


Over the past few weeks, the cybersecurity landscape has changed dramatically. Employees working at home mean more exposed attack surface and plenty of unusual user behavior patterns. And newly deployed remote collaboration platforms might not have been fully vetted yet.

One sector of the cybersecurity industry might help compensate for these new risk factors: deception technology. Formerly known as honeypots — a term that does not Google well — deception technologies sprinkle the environment with fake "accidentally leaked" credentials, decoy databases, and mock servers that are invisible to legitimate users. You then wait for attackers to stumble on them. False positive rates are low, so companies can immediately kick off automated remediation strategies like blocking IP addresses and quarantining infected systems.
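
To make the mechanics concrete, here is a minimal, hypothetical sketch (not any vendor's implementation) of the core idea: a decoy service that no legitimate user has a reason to touch, so any connection is treated as a high-fidelity alert and the source is queued for blocking. The `DecoyService` class and its in-memory block list are illustrative stand-ins for a real deception platform and firewall.

```python
import socket
import threading

class DecoyService:
    """A minimal TCP decoy: legitimate users never connect to it,
    so every connection is treated as a high-fidelity alert."""

    def __init__(self, host="127.0.0.1", port=0):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.sock.bind((host, port))     # port 0: let the OS pick a free port
        self.sock.listen(1)
        self.port = self.sock.getsockname()[1]
        self.blocked_ips = []            # stand-in for a firewall block list

    def serve_once(self):
        conn, (ip, _) = self.sock.accept()
        conn.close()
        # Automated remediation: block the source immediately, with
        # confidence, because no benign traffic reaches the decoy.
        self.blocked_ips.append(ip)
        return ip

decoy = DecoyService()
t = threading.Thread(target=decoy.serve_once)
t.start()

# Simulate an attacker stumbling onto the decoy.
attacker = socket.create_connection(("127.0.0.1", decoy.port))
attacker.close()
t.join()
print(decoy.blocked_ips)  # the "attacker" address, queued for blocking
```

In a real deployment the block list hand-off would go to a firewall or NAC system rather than a Python list, but the low-false-positive logic is the same.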

This technology may have a bad reputation for manageability and overhead, but artificial intelligence (AI) and machine learning (ML) are eliminating some of the biggest problems, and some companies are already putting it to work.

AI speeds deception technology rollout at Aflac

Insurance giant Aflac, for example, began looking at deception technology three years ago and ran proofs of concept with multiple vendors. "What we wanted was a technology that could be attack agnostic," says DJ Goldsworthy, Aflac's director of security operations and threat management, "one that doesn't depend on any signatures or behavioral patterns. One that would detect any type of attack."

Deception was appealing, he says, because it provided a first and last line of defense, helping guard the company against low-level probing all the way up to advanced persistent threats that had somehow already infiltrated company networks. The platform Aflac wound up choosing, Attivo Networks, uses artificial intelligence to build the deception grids and place real-looking decoys through the entire environment — endpoint devices, networks, servers and even cloud infrastructure.

"We want deception to be ubiquitous," Goldsworthy says, "but if you try to do that manually, that would be an insurmountable task."

Artificial intelligence is also used to create a baseline of good traffic, since internal security and management systems, or external bots like those belonging to search engines, may be scanning the environment at any time. It took just one person less than a month to get the deception system up and running, Goldsworthy says.
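
A hedged sketch of what such baselining might look like, under the assumption (not described in detail by Aflac) that it amounts to learning which sources routinely probe the network during a quiet period, then suppressing alerts for those sources later. The event records and addresses here are hypothetical.

```python
# Hypothetical baseline window: sources that routinely probe the
# network (vulnerability scanners, management agents, search bots)
# are recorded so later decoy touches from them are not alerted.
baseline_window = [
    {"src": "10.0.0.5", "tag": "vuln-scanner"},
    {"src": "10.0.0.9", "tag": "mgmt-agent"},
]
known_scanners = {event["src"] for event in baseline_window}

def should_alert(event):
    """Alert only when a decoy is touched by a source that was
    not seen during the baseline period."""
    return event["src"] not in known_scanners

print(should_alert({"src": "10.0.0.5"}))     # benign scanner: suppressed
print(should_alert({"src": "203.0.113.7"}))  # unknown source: alert
```

A production system would baseline far richer features than source addresses, but the principle is the same: subtract the known-good noise so what remains is worth investigating.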

AI was also used to help position the traps in just the right places. "We want machine learning to tell us the various ways adversaries can move around to get to our crown jewels," says Goldsworthy. "We can take remediation actions to divert those flows from reaching vulnerable resources and point them to a decoy system."
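
One simple way to reason about trap placement, sketched here with a toy lateral-movement graph (the host names and edges are hypothetical, and real products use far richer models): enumerate the paths an attacker could take toward the crown jewels, then place decoys along those paths.

```python
from collections import deque

# Hypothetical lateral-movement graph: an edge A -> B means an
# attacker on A could pivot to B (shared credentials, open ports, etc.).
moves = {
    "workstation": ["file-server", "print-server"],
    "file-server": ["db-server"],
    "print-server": ["db-server"],
    "db-server": [],
}

def attack_paths(graph, start, target):
    """Enumerate simple paths an attacker could take to the target
    using a breadth-first search over the pivot graph."""
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            paths.append(path)
            continue
        for nxt in graph.get(node, []):
            if nxt not in path:          # simple paths only, no revisits
                queue.append(path + [nxt])
    return paths

# Every hop on a path toward the crown jewels is a candidate
# location for a decoy that diverts the attacker.
for path in attack_paths(moves, "workstation", "db-server"):
    print(" -> ".join(path))
```

Here both routes to the database pass through an intermediate server, so decoys mimicking those intermediates would intercept either route.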

Deception alerts the highest priority

Finally, intelligence can be brought to bear on the attacks that fall into the decoy traps, to see how they behave and how they were able to infect the enterprise. "Deception is handily our highest fidelity security alert," Goldsworthy says. "If deception triggers, it is almost every time something that needs to be investigated — or is part of a security test."

Those alerts can then be handed off to security analysts, along with the context they need to see the attack path and the history of interactions with the system. "That's helpful because that feeds directly into the SIEM and you can cross-correlate to see if there's other activity in the environment, see what else happened," Goldsworthy says.

The alerts can also be used to trigger automated responses with confidence that the problems are real. "We leverage auto-quarantine in our environment," says Goldsworthy. The system can be installed via appliances or virtual machines, he says, and because the decoys are only running when an attacker trips over them, they don't require a lot of computing resources.
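
The hand-off pattern described above can be sketched as follows; this is an illustrative outline, not Attivo's actual API, and the alert fields and action strings are invented for the example. A decoy alert is enriched with its interaction history, serialized for the SIEM, and simultaneously mapped to automated containment actions.

```python
import json

def handle_deception_alert(alert):
    """Sketch: enrich a decoy alert with attack-path context,
    serialize it for SIEM cross-correlation, and queue automated
    quarantine actions that can be trusted because decoys see
    no legitimate traffic."""
    enriched = {
        **alert,
        "fidelity": "high",
        "attack_path": alert.get("interactions", []),
    }
    siem_event = json.dumps(enriched)       # hand-off for cross-correlation
    actions = [
        f"quarantine-host {alert['host']}",
        f"block-ip {alert['src_ip']}",
    ]
    return siem_event, actions

event, actions = handle_deception_alert({
    "host": "ws-042",
    "src_ip": "10.0.0.66",
    "interactions": ["touched decoy share", "read fake credentials"],
})
print(actions)
```

The key design point is that the remediation branch runs unconditionally on any decoy alert; it is the near-zero false-positive rate, not extra filtering logic, that makes auto-quarantine safe.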

Deception technology at scale still a challenge

Aflac's experience with deception isn't necessarily typical. "Creating a digital grid for deception and doing it at scale — we're not seeing big examples of this with our clients," says Anand Rao, partner and global AI leader at PricewaterhouseCoopers.

"The technology is not yet setting the world on fire," says Paula Musich, an analyst with Enterprise Management Associates, in a report released last year. "The few publicly available estimates of the market’s growth rate suggest single- or low double-digit increases." Part of the problem could be that enterprises worry that putting attractive lures out for attackers to find may just put a big bullseye on their companies, she says.

Another problem is that deception grids can be big and complicated. "With honeypots — and all deception — the problem is complexity," says Frank Dickson, research director for worldwide security products at International Data Corp. "And the enemy of security is complexity."

As a result, Dickson says, companies focus their spending and energy on technologies that can provide a clearer security benefit, and that's particularly true for small and medium enterprises that don't have the personnel to handle deception. "It's not whether or not it's effective," he says, "but is it more effective than the other initiatives on my plate? It's not easy to implement; it has its challenges."

That's why IDC itself doesn't have a market size report about deception, Dickson adds. "We decided that there were other priorities we want to cover."

Johannes Ullrich, dean of research at the SANS Technology Institute, says he likes the idea of deception. "It's a great tool for defense," he says. "But it's had a hard time getting into the enterprise. Two years ago, there were a number of companies that took another stab at the problem, to get it out of the honeypot research phase and into something that could be deployed at scale with deployment platforms. Last year, a lot of these deception startups had issues gaining traction."

Maybe adding machine learning and artificial intelligence capabilities will help, he says. "Or, at least, it will get traction with VCs so they can get another round of funding."

Some analysts are more optimistic. Gartner, for example, considers deception a technology that is already having a high impact on security. "One of the reasons that makes deception a good threat detection technology is the low friction aspect," says Gartner analyst Augusto Barros. "In essence, it means that it usually has lower operations hassles and skills issues."

Barros says that the deception market is still relatively small but will grow quickly. He said he saw a 65% growth rate last year, though he expects growth to slow in the future. In addition, according to Gartner, 25% of all threat detection products will embed deception features and functionality by 2022, up from less than 5% today.

GigaOm analyst Simon Gibson says it's a mistake to think of deception technology as a luxury item, because the new platforms are easy to deploy, have low overhead, scale easily, and have few false positives. "It is a technology that almost any enterprise — small, medium or large — could employ to an enormous advantage," he says.

According to Mordor Intelligence, one of the few research firms to put a number to the size of this market, the deception market was valued at $1.2 billion in 2019, which is less than 1% of the total $161 billion cybersecurity market. The firm expects a 13.3% compound annual growth rate over the next five years, predicting that the market will reach $2.5 billion in 2025.

Will the COVID crisis spur interest in deception technology?

Deception could be a particularly good fit for the "new normal" of the post-pandemic security landscape, experts say. COVID-19 isn't just making people sick. Attackers can smell blood in the water, and they are stepping up all kinds of attacks.

On Wednesday, the US Department of Homeland Security issued an alert warning about new pandemic-related threats from cybercriminals and advanced persistent threat groups. On the same day, Interpol issued an alert about criminals specifically targeting health care institutions.

One healthcare company that is particularly worried about attacks on its employees is now adding deceptive servers to tools that provide secure environments for home-based users. "The majority of staffers are coming in from outside now, and we need deception available to address the risks introduced in this environment," says the company's information security analyst, who did not want to be quoted by name for security reasons.

The healthcare company serves about 25,000 patients in hundreds of facilities across the US and first began using deception last year, with technology from Illusive Networks. "The forensics API provides a breadth of data on real-time source and target forensics," says the information security analyst. That includes the attacker's location and the specific violation, packaged in a fully formed security alert. "That significantly cuts research and investigation time," says the analyst.

The tool was simple and easy to deploy — perhaps too simple, the analyst says. The company deployed more than 1,200 deceptive servers, in an environment with only 400 actual servers — and only 16 IP addresses. "It doesn't take long to figure out that something is up," the analyst says. "Our current deployment is based on a lower number that actually increases the hiding of the trap."

Crook-sourcing threat intelligence

One area where honeypots have always been useful is when threat research companies set up the traps to learn about attacker behavior. Now, some researchers are combining user behavior analytics with deception technology to analyze the behaviors of the hackers themselves. "Here is where I think AI-powered honeypots are really interesting," says Liz Miller, VP and principal analyst at Constellation Research.

Honeypots can bring together a group of people who are malicious, a research "control group" of bad actors, Miller says. This can be used by vendors to augment the user behavior analytics technologies many enterprises already have in place. "You can start looking for the behavior of the attacker, rather than just looking at the behavior of the user," she says. Enterprise customers who already have user behavior analytics in place will then get the benefit of the deception-based attacker profiles without any additional effort of their own.

Another cutting-edge area of AI research is that of generative adversarial networks. This is the same technology behind deep fake videos — one AI system creates a fake, another tries to tell if it's fake or not, and the two keep battling it out until the fakes are indistinguishable from the real thing.

This may be a very bad thing for political discourse, or for companies whose CEOs and CFOs appear in publicly available videos that hackers could use to create fake versions of those executives. It could be good, however, for companies trying to create deception grids that fool even the best attackers, says Victor Aranda, principal architect with the security consulting services team at Insight, a technology consulting and system integration firm.

"A lot of malware will try to detect the environment it's in, whether it's being executed in a sandbox, whether it's running on bare metal, whether it's being analyzed," he says. Sophisticated attackers will try to avoid anything that looks like it can be a honeypot, Aranda says.

Adversarial networks can take the deception grids created by today's ML systems and tune them to a point where they're completely indistinguishable to the attackers. "Or the attackers might use machine learning themselves to identify honeypot software and learn its characteristics and how to avoid it in the future," Aranda adds. "There are data scientists working on both sides. This cat and mouse game is never going to stop."

Copyright © 2020 IDG Communications, Inc.
