Malwarebytes survey reveals 81% of people are concerned about the security risks posed by ChatGPT and generative AI, while just 7% think they will improve internet security.

A new Malwarebytes survey has revealed that 81% of people are concerned about the security risks posed by ChatGPT and generative AI. The cybersecurity vendor collected a total of 1,449 responses from a survey in late May, with 51% of those polled questioning whether AI tools can improve internet safety and 63% distrusting the information ChatGPT produces. What's more, 52% want ChatGPT development paused so regulations can catch up. Just 7% of respondents agreed that ChatGPT and other AI tools will improve internet safety.

In March, a raft of tech luminaries signed a letter calling for all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months, to allow time to "jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts." The letter cited the "profound risks" posed by "AI systems with human-competitive intelligence."

The potential security risks surrounding generative AI use for businesses are well documented, as are vulnerabilities known to impact the large language model (LLM) applications they use. Meanwhile, malicious actors can use generative AI and LLMs to enhance attacks. Despite this, there are use cases for the technology to enhance cybersecurity: generative AI- and LLM-enhanced security threat detection and response is a prevalent trend in the cybersecurity market as vendors attempt to make their products smarter, quicker, and more concise.

ChatGPT, generative AI "not accurate or trustworthy"

In Malwarebytes' survey, only 12% of respondents agreed with the statement, "The information produced by ChatGPT is accurate," while 55% disagreed, a significant discrepancy, the vendor wrote. Furthermore, only 10% agreed with the statement, "I trust the information produced by ChatGPT."

A key concern about the data produced by generative AI platforms is the risk of "hallucination," whereby machine learning models produce untruths. This becomes a serious issue for organizations if that content is relied upon heavily to make decisions, particularly those relating to threat detection and response. Rik Turner, a senior principal analyst for cybersecurity at Omdia, discussed this concept with CSO earlier this month. "LLMs are notorious for making things up," he said. "If it comes back talking rubbish and the analyst can easily identify it as such, he or she can slap it down and help train the algorithm further. But what if the hallucination is highly plausible and looks like the real thing? In other words, could the LLM in fact lend extra credence to a false positive, with potentially dire consequences if the T1 analyst goes ahead and takes down a system or blocks a high-net-worth customer from their account for several hours?"