The Top 3 AI Myths in Cybersecurity

AI ought to be a tool that aids cybersecurity teams working to catch malicious actors. However, a Devo-commissioned Wakefield Research report found there’s still work to be done.


Whether it’s in novels or the movies based on them, artificial intelligence has been a subject of fascination for decades. While the synthetic humans envisioned by Philip K. Dick remain (fortunately) the stuff of science fiction, artificial intelligence is real and playing an increasingly large role in many aspects of our lives.

While it’s fun to root against (or maybe for) human-like robots with AI brains, a much more mundane, but equally powerful form of AI is starting to play a role in cybersecurity.

The goal is for AI to be a force multiplier for hardworking security professionals. Security operations center (SOC) analysts, as we saw in the most recent Devo SOC Performance Report™, are often overwhelmed by the never-ending number of alerts that hit their screens each day. Alert fatigue has become an industry-wide cause of analyst burnout.

Ideally, AI could help SOC analysts keep pace with (and stay ahead of) clever and relentless threat actors who are using AI effectively for criminal or espionage purposes. But unfortunately, that doesn’t seem to be happening yet.

The Big AI Lie

Devo commissioned Wakefield Research to survey 200 IT security professionals about how they feel about AI. The survey covers AI implementations that span a gamut of defensive disciplines, including threat detection, breach risk prediction, and incident response/management.

AI is supposed to be a force multiplier for cybersecurity teams desperately working to keep pace with savvy malicious actors while contending with talent shortages and more. However, not all AI is that intelligent, and that’s even before we account for mismatches between needs and capabilities.

Myth #1: AI-Powered Cybersecurity is Already Here

All survey respondents said their organization is using AI in one or more areas. The top usage area is IT asset inventory management, followed by threat detection (which is encouraging to see) and breach risk prediction.

But in terms of leveraging AI directly in the battle against threat actors, it’s not much of a fight at this point. Some 67% of survey respondents said their organization’s use of AI “barely scratches the surface.”

Take a look at how respondents feel about their organization’s reliance on AI in their cybersecurity program.

[Chart: How respondents view their organization’s reliance on AI]

More than half of respondents believe their organization, at least currently, is relying too much on AI. Fewer than one-third think the reliance on AI is appropriate, while a minority of respondents think their organization isn’t doing enough with AI.

Myth #2: AI Will Solve Security Problems

When asked for their thoughts about the challenges posed by AI use in their organizations, respondents weren’t shy. Just 11% of respondents said they haven’t experienced any problems using AI for cybersecurity. The vast majority of respondents see things quite differently.

[Chart: AI-related challenges respondents have experienced]

When asked where in their organization’s security stack AI-related challenges occurred, core cybersecurity functions did not fare well. While IT asset inventory management was the top AI problem area, according to 53% of respondents, three cybersecurity categories also received less-than-stellar responses:

  • Threat detection (33%)
  • Understanding cybersecurity strengths and gaps (24%)
  • Breach risk prediction (23%)

It’s interesting to note that far fewer respondents (13%) cited incident response as posing AI-related challenges.

Myth #3: AI is Intelligent, so It Must Be Effective

It seems clear that while AI already is being used in cybersecurity, the results are mixed. The Big AI Lie is that not all AI is as “intelligent” as the name implies, and that’s even before accounting for mismatches between organizational needs and capabilities.

The cybersecurity industry has long fixated on seeking ‘silver bullet’ solutions, and AI is the latest one. Organizations must be deliberate and results-driven in how they evaluate and deploy AI solutions. Unless SOC teams combine AI with experienced experts steeped in the technology, they risk failure in a critical area with little to no margin for error.

Organizations must be sure to work with experienced experts in AI technology. Learn more at Devo.com.

Copyright © 2022 IDG Communications, Inc.