By Michael Hill, UK Editor

Survey reveals mass concern over generative AI security risks

Jun 27, 2023 | 3 mins
Generative AI | Security

Malwarebytes survey reveals 81% of people are concerned about the security risks posed by ChatGPT and generative AI, while just 7% think they will improve internet security.

[Image: robotic hand typing on keyboard. Credit: Andrey_Popov/Shutterstock]

A new Malwarebytes survey has revealed that 81% of people are concerned about the security risks posed by ChatGPT and generative AI. The cybersecurity vendor collected a total of 1,449 responses to a survey conducted in late May: 51% of those polled questioned whether AI tools can improve internet safety, 63% said they distrust information produced by ChatGPT, and 52% want ChatGPT development paused so that regulations can catch up. Just 7% of respondents agreed that ChatGPT and other AI tools will improve internet safety.

In March, a raft of tech luminaries signed a letter calling for all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months to allow time to "jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts." The letter cited the "profound risks" posed by AI systems with "human-competitive intelligence."

The potential security risks surrounding generative AI use in business are well documented, as are the vulnerabilities known to affect the large language model (LLM) applications that underpin it. Meanwhile, malicious actors can use generative AI and LLMs to enhance attacks. Despite this, there are legitimate use cases for the technology in cybersecurity: generative AI- and LLM-enhanced threat detection and response is a prevalent trend in the market as vendors attempt to make their products smarter, quicker, and more concise.

ChatGPT, generative AI "not accurate or trustworthy"

In Malwarebytes' survey, only 12% of respondents agreed with the statement, "The information produced by ChatGPT is accurate," while 55% disagreed, a significant discrepancy, the vendor wrote. Furthermore, only 10% agreed with the statement, "I trust the information produced by ChatGPT."

A key concern about the data produced by generative AI platforms is the risk of "hallucination," whereby machine learning models produce untruths. This becomes a serious issue for organizations if such content is relied upon heavily to make decisions, particularly those relating to threat detection and response. Rik Turner, a senior principal analyst for cybersecurity at Omdia, discussed this concept with CSO earlier this month. "LLMs are notorious for making things up," he said. "If it comes back talking rubbish and the analyst can easily identify it as such, he or she can slap it down and help train the algorithm further. But what if the hallucination is highly plausible and looks like the real thing? In other words, could the LLM in fact lend extra credence to a false positive, with potentially dire consequences if the T1 analyst goes ahead and takes down a system or blocks a high-net-worth customer from their account for several hours?"

Michael Hill is the UK editor of CSO Online. He has spent the past eight years covering various aspects of the cybersecurity industry, with particular interest in the ever-evolving role of the human-related elements of information security. A keen storyteller with a passion for the publishing process, he enjoys working creatively to produce media that has the biggest possible impact on the audience.