
Emerging Market of Cybercrime Tools Driven by Generative AI Offers Automated Assistance With Fraud, Malware Creation

A new breed of virtual assistant software is making an appearance on underground forums, aimed at "black hat" hackers looking to steal money. Sellers are offering generative AI models in the style of ChatGPT that promise to help create malware, write phishing emails, set up attack sites, scan for vulnerabilities, and more. Going by names like "FraudGPT" and "WormGPT," these cybercrime tools appear to be particularly well suited to assisting with business email compromise (BEC) attacks.

Generative AI “GPT” tools begin to appear on cybercrime forums

The first generative AI tool of this type to appear was WormGPT, spotted by security researchers in mid-July. The tool appears to be fine-tuned for BEC attack needs, and may be a modified version of the GPT-J language model released in mid-2021. GPT-J is roughly comparable in capability to GPT-3, but WormGPT's creators appear to have fine-tuned it to generate professional-looking emails without requiring the operator to be a native speaker of the target language.

FraudGPT appeared about two weeks later, toward the end of July. This iteration advertises a wider range of capabilities: writing malicious code, a variety of internet scanning functions, assistance in building hacking tools, creation of scam pages, and training in the use of cybercrime tools, among other options.

These cybercrime tools were first offered on mainstream "clearnet" forums that discuss black hat hacking in indirect terms and are generally considered amateur territory, but the listings were banned from a number of them for being too explicit about their purpose. That pushed the creator to Telegram to peddle their wares.

The creator of FraudGPT, “CanadianKingpin12,” also popped up on a hacking forum to say that two more generative AI tools are in the works: DarkBART and DarkBERT. DarkBART is supposedly a twisted version of Google’s Bard AI, while DarkBERT is billed as an all-encompassing app that has been trained on dark web sites. These two new cybercrime tools will also apparently offer integration with Google Lens, to allow for the input of images along with text. It remains to be seen how capable they are, however, or if they will even ultimately hit the market.

Whether it ends up being these particular generative AI models or some entirely new development, it is highly likely that this genre of cybercrime tools is only going to expand in the near future. While the present tools have only proven themselves at generating emails that read like natural language, it is likely only a matter of time before they can capably assist with more advanced tasks such as developing complex social engineering campaigns or probing for zero-day vulnerabilities.

Advanced cybercrime tools threaten to supercharge amateur hacking efforts

These generative AI tools will very likely help advanced hackers fine-tune their vulnerability research, malware development and phishing campaigns. But the biggest long-run risk may come from arming amateurs with automated cybercrime tools that elevate them to a dangerous level of competence, if only by increasing the number of attempts employees have to fend off.

In the meantime, there is also a dark web trade in finding ways to exploit legitimate generative AI models and break the "guardrails" that prevent them from engaging in potential criminal behavior. Cyber criminals are now offering engineered prompts in exchange for payment, much in the same way that they sell login credentials. Some security researchers believe that CanadianKingpin12's "DarkBERT" project is something like this, providing engineered access to an existing project of the same name meant to detect and fight cyber crime by training on dark web materials. The S2W AI team, which maintains the legitimate DarkBERT project, has said that the model retains no personal information and has safeguards against attempts to use it for BEC campaigns.

The rise of readily available generative AI cybercrime tools will inevitably mean that organizations have to step up their defenses. AI itself may provide some relief, in the form of automated security tools that employ learning algorithms. But ultimately the critical layer of defense will be employee awareness of these enhanced capabilities, and of the increased likelihood of being targeted by relatively sophisticated attacks.
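
To make the "learning algorithms" point concrete, the sketch below shows one common pattern for automated phishing detection: a text classifier trained on labeled emails. This is a minimal illustration, not a production filter; the tiny training set and the TF-IDF-plus-logistic-regression pipeline are assumptions chosen for brevity, and real systems would train on a large corpus and combine content scoring with header, URL, and sender-reputation analysis.

```python
# A minimal sketch of a learning-based phishing filter: TF-IDF features
# feeding a logistic regression classifier. The toy training set below is
# illustrative only; a real deployment would train on a large labeled
# corpus and pair content scoring with header and URL analysis.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples: 1 = phishing/BEC-style, 0 = benign.
emails = [
    "Urgent wire transfer needed today, reply with the account details",
    "Your invoice is overdue, click here to verify your payment information",
    "Attached is the agenda for Thursday's project sync",
    "The lunch menu for the week is posted in the break room",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# Score a new message; the second column is the estimated phishing probability.
suspect = "Please process this urgent payment and confirm the wire details"
print(model.predict_proba([suspect])[0, 1])
```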

At the moment, BEC is the primary concern. Some of the main defense elements are fundamentals that are never going to become outmoded, such as implementing multi-factor authentication on logins and setting policy that requires adequately complex and secure passwords. FBI bulletins on the subject, along with numerous cybersecurity experts, have also recommended setting alerts for any logins from foreign countries. But employee training, particularly running simulated attack scenarios, will only become more vital as these attacks grow more numerous and sophisticated with the assistance of generative AI cybercrime tools.
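
As an illustration of the foreign-login alerting that the FBI bulletins recommend, the sketch below flags any login whose source country falls outside an expected allowlist. The event shape, the allowlist, and the assumption that an upstream GeoIP step has already resolved the source IP to a country code are all hypothetical stand-ins; a real deployment would wire this logic into the identity provider's event stream.

```python
# A minimal sketch of foreign-login alerting. Assumes an upstream GeoIP
# step (e.g., a MaxMind GeoLite2 lookup) has already resolved each login's
# source IP to an ISO country code; the event shape here is hypothetical.
from dataclasses import dataclass

# Countries the workforce is expected to log in from (an assumption for the demo).
ALLOWED_COUNTRIES = {"US", "CA"}

@dataclass
class LoginEvent:
    user: str
    source_ip: str
    country: str  # ISO 3166-1 alpha-2 code from the upstream GeoIP lookup

def alert_on_foreign_login(event: LoginEvent) -> bool:
    """Return True and raise an alert when a login originates outside the allowlist."""
    if event.country not in ALLOWED_COUNTRIES:
        print(f"ALERT: {event.user} logged in from {event.country} ({event.source_ip})")
        return True
    return False

# A login resolved to a country outside the allowlist triggers the alert.
alert_on_foreign_login(LoginEvent(user="alice", source_ip="203.0.113.7", country="RU"))
```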