With the rise of generative artificial intelligence (AI), it was only a matter of time before malicious actors found ways to exploit it for nefarious purposes.
Enter WormGPT, a new cybercrime tool surfacing in underground forums, promising adversaries the ability to quickly launch sophisticated phishing and business email compromise (BEC) attacks.
Security researchers at SlashNext have revealed that WormGPT is designed explicitly for malicious activity, serving as a blackhat alternative to legitimate GPT models.
This AI tool enables cybercriminals to automate the creation of compelling fake emails personalized to each recipient, significantly increasing the likelihood of successful attacks.
The author of WormGPT touts it as an unrestricted rival to well-known AI models such as ChatGPT, one that lets users engage in illegal activity without ethical boundaries. This underscores the danger generative AI poses when exploited by malicious actors.
OpenAI’s ChatGPT and Google’s Bard both include measures to curb the misuse of large language models (LLMs) for fabricating phishing emails and generating harmful code. However, Bard’s anti-abuse restrictions are reportedly less stringent than ChatGPT’s, making it easier for attackers to generate malicious content with Bard.
In the past, cybercriminals have circumvented ChatGPT’s restrictions by leveraging its API, trading stolen premium accounts, and selling brute-force software to hack into ChatGPT accounts using massive lists of email addresses and passwords.
With WormGPT on the scene, the risks grow more significant still: the tool lets even novice cybercriminals launch large-scale attacks without advanced technical skills.
Moreover, the emergence of “jailbreaks” for ChatGPT has exacerbated the situation. Threat actors engineer specialized prompts and inputs that manipulate the model into disclosing sensitive information, producing inappropriate content, or generating harmful code. And because generative AI tools can write emails with impeccable grammar, the messages appear legitimate and are less likely to be flagged as suspicious.
Generative AI has democratized the execution of sophisticated BEC attacks, enabling attackers with limited skills to leverage the technology effectively. This accessibility makes WormGPT a dangerous tool for a far broader spectrum of cybercriminals.
Adding to the growing concerns, researchers from Mithril Security have demonstrated how an open-source AI model, GPT-J-6B, can be manipulated to spread disinformation. Dubbed PoisonGPT, the technique involves uploading a lobotomized model to a public repository, disguised as a release from a known company, potentially poisoning the LLM supply chain.
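One practical defense against this kind of look-alike repository attack is to never pull a model by name alone. Below is a minimal sketch in Python using the huggingface_hub library, which pins a download to an exact repository name and a specific commit hash; the hash shown is a placeholder for a revision you have actually audited, not a real value.

```python
# A minimal sketch of one defence against supply-chain poisoning like PoisonGPT:
# download a model only by its exact repository name and a pinned commit hash,
# so a typosquatted repository or a silently swapped upload cannot slip through.
# The commit SHA below is a placeholder; pin the snapshot you have audited.
from huggingface_hub import snapshot_download

AUDITED_REVISION = "0123456789abcdef0123456789abcdef01234567"  # placeholder commit SHA

local_path = snapshot_download(
    repo_id="EleutherAI/gpt-j-6b",  # exact repo name; PoisonGPT typosquatted "EleuterAI"
    revision=AUDITED_REVISION,      # refuse anything other than the audited snapshot
)
print(f"Pinned snapshot downloaded to {local_path}")
```

Pinning a revision does not prove the weights are benign, but it guarantees you load the same artifact that was reviewed, rather than whatever currently sits at the repository’s head.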
Given these developments, it is imperative that cybersecurity experts and AI developers work together to address the vulnerabilities and threats generative AI introduces. Data security is paramount, and proactive measures are essential to protect individuals and organizations from the misuse of these powerful tools.
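As one concrete example of such a proactive measure against BEC-style spoofing, the sketch below checks whether a domain publishes SPF and DMARC records, which let receiving mail servers reject forged sender addresses. It is a minimal illustration, assuming the third-party dnspython package is installed; example.com is a placeholder domain.

```python
# A minimal sketch of one proactive email-security check: confirm that a domain
# publishes SPF and DMARC records, which help mail servers reject the spoofed
# sender addresses that BEC and phishing emails rely on. Requires dnspython.
import dns.resolver

def get_txt_records(name: str) -> list[str]:
    """Return all TXT records for a DNS name, or an empty list if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(rdata.strings).decode("utf-8", "replace") for rdata in answers]

def check_email_auth(domain: str) -> None:
    # SPF lives in a TXT record on the domain itself, beginning with "v=spf1".
    spf = [r for r in get_txt_records(domain) if r.lower().startswith("v=spf1")]
    # DMARC lives in a TXT record on the _dmarc subdomain, beginning with "v=DMARC1".
    dmarc = [r for r in get_txt_records(f"_dmarc.{domain}")
             if r.lower().startswith("v=dmarc1")]
    print(f"{domain}: SPF {'present' if spf else 'MISSING'}, "
          f"DMARC {'present' if dmarc else 'MISSING'}")

if __name__ == "__main__":
    check_email_auth("example.com")  # placeholder; check your own domain
```

Checks like this do not stop an AI-written email from being convincing, but they do deny attackers the easiest route of sending it from your own spoofed domain.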
Stay vigilant and keep up with the latest developments in AI and cybersecurity to safeguard your data and privacy.
Must read: How ChatGPT is not Safe for its Users