When it comes to artificial intelligence (AI) and security, InfoSec and IT teams have raised skepticism to an art form. After all, we've been hearing since the late 1980s that AI is coming. And given that most vendors have slipped AI and machine learning (ML) terminology into their marketing speak, it can seem like more of the same: selling the future, not the present.
There may be a germ of truth in that. But in reporting this story, we were hard-pressed to find experts who aren't firmly convinced that AI is about to have a profound effect on information security. The experts we interviewed disagreed about the timeline: some wondered why we haven't already seen cybercriminals wielding AI in powerful ways, while others put the threat three to five years out. All but one saw it as security's next big challenge.
Tom Koulopoulos, chairman of Delphi Group and advisor to Wasabi Technologies, said that, looking out three years, "we start to see AI used to automate highly personalized attacks that use a combination of behavioral data and patterns of online interaction to hyper-target individuals." Consider the phishing emails you get today in comparison to what they might become in a few years. "Imagine phishing attacks that reference actual interactions with someone you know," he added, offering this example of an AI-boosted phishing message: "Hey, it was great to see you yesterday at Starbucks! I know you're interested in additional travel to the Mediterranean since your trip to Crete last summer. You might want to check out this offer I came across...."
Think that's creepy? Well, strap in, cyber-citizens. Koulopoulos warned that "it barely scratches the surface of the threats that AI will be able to mine from the digital patterns of our behaviors and the interaction data we create with a rapidly growing [number of apps and] devices connected to the internet of things (IoT)."
Automating attacks and evading detection
Ganesh Krishnan, co-founder and CTO of Avid Secure and former senior InfoSec executive at companies like Yahoo and Atlassian, offered a different example. "A smarter attack would involve installing malware that learns about the environment it's running in, discovers how to move laterally, evades detection and exfiltrates data in a way that's difficult to distinguish from normal behavior," thereby defeating detection and monitoring tools.
Umesh Yerram, chief data protection officer at pharmaceutical company AmerisourceBergen, concurred. "Once AI technologies are widely available, cybercriminals will be able to launch a new wave of sophisticated attacks that may evade most traditional security-detection and monitoring tools."
Then there's the issue of scale. Today's complex security breaches are typically not automated; significant effort and sophistication on the part of attackers are required to breach systems and extract data. But people need to eat and sleep. They can't work around the clock. "Fast-forward to AI, and we're looking at intelligent machine-to-machine attacks that can operate 24 x 7 x 365," said Niall Browne, CISO and SVP of trust and security at Domo. "Cybercriminal AI systems will make millions of intuitive decisions per second about the best way to infiltrate and compromise your data." The scary part, Browne added, is that outlaw AI systems of the future "will be capable of attacking not only one company but hundreds or thousands of organizations concurrently. It's a frightening landscape."
Most of the experts we spoke with didn't believe the targets would change once AI makes its presence felt. Browne predicted, however, that the focus of cyberattacks will shift "from simply stealing data and shutting down systems to more complex objectives, including manipulation." In other words, attackers will convince a system, person or company to act in a certain way by altering the data it sees. Browne added, "it might be used to manipulate stock prices, cause currency fluctuation, obscure the result of nuclear reactor testing or influence the results of an election."
Looking three years out, "I can imagine the better organized or nation-state cybercrime organizations using AI for synthetically generating and creating new attack vectors," said Pascal Geenens, EMEA security evangelist at Radware. The whole field of AI should come into play, he added. "Where previously machine learning has been the primary domain leveraged for automating attacks, AI systems such as genetic algorithms and reinforcement learning will be used for generating new attack vectors and systematically breaching all kinds of systems, whether cloud, IoT or industrial IoT/SCADA. When cybercriminals use this in combination with automation, we will encounter a fully automated ecosystem that will hack, crack and improve itself over time, with no limits on scale or endurance." It would be a continuously evolving attack system.
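What might that evolutionary loop look like in code? Defenders already know it as fuzz testing. Below is a minimal, purely illustrative Python sketch of a genetic algorithm mutating inputs against a toy target; `parse_request` and its scoring are hypothetical stand-ins for the coverage or crash feedback a real fuzzer would use, not an attack tool.

```python
import random

def parse_request(data: bytes) -> int:
    # Hypothetical stand-in for the system under test. Returns a crude
    # "interest" score; a real fuzzer would use coverage or crash feedback.
    try:
        text = data.decode("utf-8")
        return len(set(text)) + (10 if "{" in text else 0)
    except UnicodeDecodeError:
        return 20  # malformed input that still reached the parser is interesting

def mutate(seed: bytes) -> bytes:
    # One random genetic operator: bit flip, insertion or deletion.
    data = bytearray(seed)
    pos = random.randrange(max(len(data), 1))
    roll = random.random()
    if roll < 0.4 and data:
        data[pos] ^= 1 << random.randrange(8)
    elif roll < 0.8:
        data.insert(pos, random.randrange(256))
    elif data:
        del data[pos]
    return bytes(data)

def evolve(seeds, generations=100, population_size=32):
    # Keep the highest-scoring quarter each generation, refill with mutants.
    population = list(seeds)
    for _ in range(generations):
        population.sort(key=parse_request, reverse=True)
        survivors = population[: population_size // 4]
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(population_size - len(survivors))
        ]
    return population[0]

print(evolve([b"GET / HTTP/1.1"]))
```

Swap the toy scoring for real instrumentation and run it without rest, and you have the "no limits on scale or endurance" ecosystem Geenens describes.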
Geenens sees cybercriminals changing roles, switching away from performing the real attack to becoming maintainers and developers of automated AI hacking machines. Machines will do the hacking; people will improve the efficiency of the machines.
Advice for CSOs and CISOs
Security experts agree that InfoSec will need to fight fire with fire: AI will be at least as important for defending against AI-powered attacks as it is expected to be for waging them. Even so, there are very few public examples of how AI might be used that way. Some companies are keeping this work very close to the chest. "Getting people to talk about case studies on this topic is like getting the Vatican to give you directions to the Holy Grail," Koulopoulos joked.
What should security leaders do to prepare? AmerisourceBergen's Yerram said, "CSOs and CISOs should quickly embrace AI-based technologies. We will see AI-based threats sooner rather than later. Now is the time for CSOs and CISOs to lead this AI-based security controls evolution to secure their enterprises."
"The whole game will be about automating the detection of new, increasingly complex and continuously adapting threats," Geenens said. "You cannot stop what you cannot detect." With a probable new wave of AI-enhanced threats expected to begin emerging over the next two years, the initial focus should be on using AI to ensure detection of AI-based threats. Buying and/or developing such a system should be a top priority.
Sooner or later your company will be forced to invest in AI technology to protect against AI-based threats. Things will go better if you don't wait until you absolutely must, because wielding AI is not as simple as flipping a switch on a black box. It will take time and money to explore and learn, and cybersecurity professionals will need to enhance their skills and bone up on AI, Yerram noted.
Geenens pointed out that "the types of environments that will be vulnerable and more likely to be targeted by automated and AI-powered attacks are those that have undergone a digital transformation."
At enterprises around the world, AI will play a critical role in helping to address both the shortage of cybersecurity professionals and the cybersecurity skills gap. AI-based technologies along with robotic process automation technologies will augment information security teams to address lower-level security threats (like ransomware, malware and crypto-mining) so that senior cybersecurity professionals can focus on the new breed of sophisticated threats.
Keep an eye on source-code vulnerabilities, and make sure your cloud suppliers are doing the same. Geenens expects deep learning (a subset of machine learning) to be used to identify new code vulnerabilities. "Cloud platforms running open-source software will become a primary target for vulnerability scanning," he said. "Closed-source software isn't any less vulnerable to automated attacks."
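Geenens points at deep learning, but the underlying idea, learning what vulnerable code looks like from labeled examples, can be sketched with a far simpler model. The toy classifier below substitutes character n-grams and logistic regression for the deep model a real scanner would use over tokens or syntax trees; the snippets and labels are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, hand-labeled toy corpus: 1 = vulnerable pattern, 0 = safer equivalent.
snippets = [
    ('cursor.execute("SELECT * FROM users WHERE id=" + user_id)', 1),
    ('cursor.execute("SELECT * FROM users WHERE id=%s", (user_id,))', 0),
    ("os.system('ping ' + host)", 1),
    ("subprocess.run(['ping', host], check=True)", 0),
    ("hashlib.md5(password.encode()).hexdigest()", 1),
    ("hashlib.sha256(password.encode()).hexdigest()", 0),
    ("yaml.load(data)", 1),
    ("yaml.safe_load(data)", 0),
]
code, labels = zip(*snippets)

# Character n-grams stand in for the learned representations a real
# deep model would build from tokens or abstract syntax trees.
model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(3, 5)),
    LogisticRegression(max_iter=1000),
)
model.fit(code, labels)

print(model.predict(['db.execute("DELETE FROM t WHERE n=" + name)']))
```

The unsettling part of Geenens's prediction is that the same training loop works for whoever holds the labeled data, defender or attacker.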
Something to think about: Koulopoulos suggested that we need to "create universal identity mechanisms for humans. One of the greatest risks during the rise of AI is the inability to distinguish it from a human. Without a foolproof identity check, our vulnerability to AI-driven cybercrime will be inescapable."
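Koulopoulos doesn't specify a mechanism, but one building block any such identity scheme would likely rest on is cryptographic challenge-response, where only the holder of a private key can answer. Here is a minimal sketch using the Python cryptography package; the enrollment and identity-proofing flow around it is assumed.

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrollment (assumed to happen once, out of band, with real-world
# identity proofing): the human registers a public key with the verifier.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Verification: the verifier issues a fresh random challenge; only the
# holder of the private key can produce a valid signature over it.
challenge = os.urandom(32)
signature = private_key.sign(challenge)

try:
    public_key.verify(signature, challenge)
    print("identity check passed")
except InvalidSignature:
    print("identity check failed")
```

Note that this proves possession of a key, not humanness; binding the key to a real, verified person is precisely the unsolved part Koulopoulos is pointing at.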
Ups and downs
The hard truth is that no one can predict the future. Two perspectives illustrate the extreme range of reaction to what's coming in the form of AI-assisted threats:
"The good guys have more employees, better compute power, a stronger understanding of networking and hardware, and relationships with content delivery networks and cloud hosts ...but this 'Spy vs. Spy' game never changes; only the [means] of engagement change," said Chris Kissel, IDC research director, worldwide security products.
"[Many] facets of human interaction now rely on technology. AI will be exponentially more intelligent than a human, will never sleep, will naturally be bent on self-preservation, and most frightening of allhas no morality. Once AI comes into existence, it will have tremendous abilities with no moral compass. It could easily use our technology against us to rip apart the fabric of modern life," said Niall Browne.
Here's the chief takeaway: whether you believe every prediction you just read matters less than that you heed the emotion behind them. The overall tenor of these perspectives can be summed up as "this is a dire situation." Don't dismiss it, and don't leave it on the back burner.