Cyber threats fueled by AI: Security's next big challenge

Security has always been an arms race. But the addition of AI is like jumping from tomahawks to Tomahawk missiles.


When it comes to artificial intelligence (AI) and security, InfoSec and IT teams have raised skepticism to an art form. After all, we've been hearing since the late 1980s that AI is coming. And given that most vendors have slipped AI and machine learning (ML) terminology into their marketing speak, it can seem like more of the same: selling the future, not the present.

There may be a germ of truth in that. But, in reporting this story, we were hard-pressed to find experts who aren't firmly convinced that AI is about to have a profound effect on information security. The experts interviewed disagreed about the timeline: some wondered why we haven't already seen cybercriminals wielding AI in powerful ways, while others put it three to five years out. All but one saw it as security's next big challenge.

Tom Koulopoulos, chairman, Delphi Group and advisor to Wasabi Technologies, said, looking out three years, "we start to see AI used to automate highly personalized attacks that use a combination of behavioral data and patterns of online interaction to hyper-target individuals." Consider the phishing emails you get today in comparison to what they might be in a few years. "Imagine phishing attacks that reference actual interactions with someone you know," he added, giving this example of an AI-boosted phishing attack message: "Hey, it was great to see you yesterday at Starbucks! I know you're interested in additional travel to the Mediterranean since your trip to Crete last summer. You might want to check out this offer I came across...."

Think that's creepy? Well, strap in, cyber-citizens. Koulopoulos warned that "it barely scratches the surface of the threats that AI will be able to mine from the digital patterns of our behaviors and the interaction data we create with a rapidly growing [number of apps and] devices connected to the internet of things (IoT)."

Automating attacks and evading detection

Ganesh Krishnan, co-founder and CTO, Avid Secure, and former senior InfoSec exec at companies like Yahoo and Atlassian, offers a different example. "A smarter attack would involve installing malware that learns about the environment it's running in, discovers how to move laterally, evades detection and exfiltrates data in a way that's difficult to distinguish from normal behavior," thereby defeating detection and monitoring tools.

Umesh Yerram, chief data protection officer at pharmaceutical company AmerisourceBergen, concurs. "Once AI technologies are widely available, cybercriminals will be able to launch a new wave of sophisticated attacks that may evade most traditional security-detection and monitoring tools."

Then there's the issue of scale. Today's complex security breaches are typically not automated, and significant effort and sophistication on the part of attackers are required to breach and extract data. But people need to eat and sleep. They can't work around the clock. "Fast-forward to AI, and we're looking at intelligent machine-to-machine attacks that can operate 24 x 7 x 365," said Niall Browne, CISO and SVP of trust and security, Domo. "Cybercriminal AI systems will make millions of intuitive decisions per second about the best way to infiltrate and compromise your data." The scary part, Browne added, is that outlaw AI systems of the future "will be capable of attacking not only one company but hundreds or thousands of organizations concurrently. It's a frightening landscape."

