WormGPT presents itself as a black-hat alternative to GPT models, designed specifically for malicious activities, according to SlashNext.

Malicious actors are now creating custom generative AI tools similar to ChatGPT, but easier to use for nefarious purposes. Not only are they creating these custom modules, they are also advertising them to fellow bad actors, according to a blog post by anti-phishing company SlashNext.

SlashNext gained access to a tool known as WormGPT through a prominent online forum that's often associated with cybercrime. "This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities," SlashNext said.

WormGPT is an AI module based on GPT-J, an open-source large language model released by EleutherAI in 2021 (a brief sketch of how freely such open models can be loaded appears below). Its features include unlimited character support, chat memory retention, and code formatting capabilities.

WormGPT used in business email compromise (BEC) attacks

Cybercriminals use generative AI to automate the creation of convincing fake emails, personalized to the recipient, increasing the attack's chances of success, according to SlashNext.

"WormGPT was allegedly trained on a diverse array of data sources, particularly concentrating on malware-related data," SlashNext said. The developer of WormGPT described it as the "biggest enemy of the well-known ChatGPT" that "lets you do all sorts of illegal stuff."

ChatGPT, the interactive chatbot developed by OpenAI, incorporates a number of safeguards designed to prevent it from encouraging or facilitating dangerous or illegal activities. This makes it less useful to cybercriminals, although some of the safeguards can be overcome with careful prompt design.

SlashNext tested WormGPT by using it to generate an email intended to pressure an unsuspecting account manager into paying a fraudulent invoice. "The results were unsettling. WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks," SlashNext said.

Benefits of using generative AI for BEC attacks

The use of generative AI democratizes the execution of sophisticated BEC attacks, according to SlashNext. Attackers with limited technical skills can use the technology, putting convincing attacks within reach of a broader spectrum of cybercriminals.

Generative AI can also produce emails free of grammatical errors, making them seem legitimate and reducing the likelihood that they are flagged as suspicious.

In one of the advertisements observed by SlashNext on a forum, attackers recommended composing an email in their native language, translating it, and then feeding it into an interface like ChatGPT to enhance its sophistication and formality. "This method introduces a stark implication: attackers, even those lacking fluency in a particular language, are now more capable than ever of fabricating persuasive emails for phishing or BEC attacks," SlashNext said.

Jailbreaks for sale

Along with the development of dedicated generative AI tools for use in BEC attacks, SlashNext has also observed cybercriminals offering "jailbreaks" for interfaces like ChatGPT. These specialized prompts enable users to disable the safeguards that developers place on mainstream generative AI tools.

Last month, cybersecurity experts demonstrated the ability of ChatGPT and other large language models (LLMs) to generate polymorphic, or mutating, code designed to evade endpoint detection and response (EDR) systems.
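Part of what makes tools like WormGPT possible is that open-source base models such as GPT-J carry no provider-side safeguards at all: anyone can download the weights and run them locally. The sketch below is a hypothetical minimal example, assuming Hugging Face's transformers library and the publicly documented EleutherAI/gpt-j-6B checkpoint; it shows only how freely the base model can be loaded. WormGPT's own fine-tuning data and tooling are not public, and nothing here reproduces them.

```python
# Minimal sketch: loading the open-source GPT-J model with Hugging Face's
# transformers library. This demonstrates only that the base model WormGPT
# reportedly builds on is freely downloadable and ships without the refusal
# layer a hosted service like ChatGPT applies; WormGPT's private fine-tune
# is not reproduced here.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "EleutherAI/gpt-j-6B"  # public 6B-parameter GPT-J checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)  # ~24 GB of weights

# Benign, illustrative prompt: the model simply continues whatever text it
# is given, with no moderation step between prompt and completion.
inputs = tokenizer("Large language models can", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because such a model runs entirely on the user's own hardware, there is no hosted service whose safeguards need to be jailbroken in the first place, which is why purpose-built tools on open models and jailbreaks for hosted interfaces represent two distinct avenues of abuse.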
Google's generative AI tool, Bard, could be an easier target than ChatGPT for jailbreakers. Earlier this week, Check Point researchers said that Bard's anti-abuse restrictions in the realm of cybersecurity are significantly weaker than ChatGPT's, making it easier to use Bard to generate malicious content.

Earlier, Mackenzie Jackson, developer advocate at cybersecurity company GitGuardian, told CSO Online that the malware ChatGPT can be tricked into producing is far from groundbreaking. However, Jackson said, as the models improve and consume more sample data, and as different products come onto the market, AI may end up creating malware that can be detected only by other, defensive AI systems.