
Foreign states already using ChatGPT maliciously, UK IT leaders believe

Feb 02, 2023 | 3 mins
Artificial Intelligence | Cybercrime

Most UK IT leaders are concerned about malicious use of ChatGPT as research shows how its capabilities can significantly enhance phishing and BEC scams.


Most UK IT leaders believe that foreign states are already using the ChatGPT chatbot for malicious purposes against other nations. That’s according to a new study from BlackBerry, which surveyed 500 UK IT decision makers. It revealed that, while 60% of respondents see ChatGPT as generally being used for “good” purposes, 72% are concerned about its potential to be used maliciously when it comes to cybersecurity. In fact, almost half (48%) predicted that a successful cyberattack will be credited to the technology within the next 12 months. The findings follow recent research showing how attackers can use ChatGPT to significantly enhance phishing and business email compromise (BEC) scams.

UK IT leaders fearful of malicious exploitation of ChatGPT’s capabilities

ChatGPT, OpenAI’s free chatbot based on GPT-3.5, was released on November 30, 2022, and racked up a million users in five days. It is capable of writing emails, essays, and code, including convincing phishing emails, if the user knows how to ask. BlackBerry’s study found that attackers’ ability to use these capabilities to craft more believable and legitimate-sounding phishing emails is a top concern for 57% of the UK IT leaders surveyed. This was followed by the increased sophistication of attacks (51%) and the ability to accelerate new social engineering attacks (49%).

Almost half of UK-based IT leaders are concerned by ChatGPT’s potential to be used for spreading misinformation (49%), as well as by its ability to help less experienced hackers improve their technical knowledge (47%). Furthermore, 88% of respondents said that governments have a responsibility to regulate advanced technologies such as ChatGPT.

“ChatGPT will likely increase its influence in the cyber industry over time,” commented Shishir Singh, CTO cybersecurity at BlackBerry. “We’ve all seen a lot of hype and scaremongering, but the pulse of the industry remains fairly pragmatic – and for good reason. There are a lot of benefits to be gained from this kind of advanced technology and we’re only beginning to scratch the surface, but we also can’t ignore the ramifications.”

ChatGPT can significantly enhance phishing and BEC scams

In January, researchers at security firm WithSecure demonstrated how the GPT-3 natural language generation model can be used to make social engineering attacks such as phishing or business email compromise (BEC) scams harder to detect and easier to pull off. The study revealed that attackers can not only generate unique variations of the same phishing lure in grammatically correct, human-like text, but can also build entire email chains to make their messages more convincing, and can even mimic the writing style of real people based on provided samples of their communications.

“The generation of versatile natural-language text from a small amount of input will inevitably interest criminals, especially cybercriminals – if it hasn’t already,” the researchers said in their paper. “Likewise, anyone who uses the web to spread scams, fake news or misinformation in general may have an interest in a tool that creates credible, possibly even compelling, text at super-human speeds.”

UK Editor

Michael Hill is the UK editor of CSO Online. He has spent the past five-plus years covering various aspects of the cybersecurity industry, with particular interest in the ever-evolving role of the human-related elements of information security. A keen storyteller with a passion for the publishing process, he enjoys working creatively to produce media that has the biggest possible impact on the audience.
