
“WormGPT”: Generative A.I. Could Help Ransomware Spread

October 18, 2023

Imagine that you’re sitting at your work computer when you receive an email:

“Hey, it’s Bill — I’m the new guy in IT. I need your password so that I can set you up on the new sales system.” 

You don’t recognize the name “Bill,” but his email links to his LinkedIn profile. “Bill” has published a number of articles on the site, and he has endorsements from dozens of other professionals. Their profiles seem legitimate, and they’re all quite active.

You’re satisfied that you’re dealing with a real human being, so you send him your password. 

Minutes later, your company’s I.T. infrastructure is compromised; you’ve fallen right into the attacker’s trap.

Bad actors can use generative A.I. to build trust with victims.

Currently, there’s no evidence that bad actors are building networks of fake LinkedIn accounts in exactly this way, but from an I.T. security perspective, the threat of artificial intelligence (A.I.) is real.

Generative A.I. can mimic human language, create realistic photos and videos, and even describe images with relative accuracy. A report from McKinsey estimates that generative A.I. products could add up to $4.4 trillion in annual value to the global economy, but A.I. can also serve nefarious purposes.

Related: Ransomware and Social Engineering: Understanding the Link

WormGPT: A chatbot trained for social engineering.

In July 2023, cybercriminals on HackForums began discussing WormGPT, a malicious chatbot built on the open-source GPT-J language model and marketed as an unrestricted alternative to OpenAI’s ChatGPT.

Unlike ChatGPT, WormGPT doesn’t have significant guardrails: The tool will readily produce malicious content (for example, a convincing email asking an employee for their password).

For bad actors, tools like WormGPT could allow for more effective attacks:

  • Bad actors don’t need to speak the victim’s language to use social engineering tactics. Generative A.I. can write fluent, natural-sounding messages in virtually any language.
  • Attacks can be automated. Attackers could train A.I. to respond to potential victims immediately (or after a few minutes, to better simulate a human writer).
  • Attackers can quickly generate dozens of emails aimed at different victims, each tailored to its target, raising the chances of a successful infiltration.
  • Generative A.I. could allow for novel attack vectors. Bad actors could generate fake profile pictures, social media profiles, and other content to build trust with victims.

Unfortunately, WormGPT is already available for purchase, with licenses reportedly ranging from roughly $500 to $5,000 USD.

 “This project aims to provide an alternative to ChatGPT, one that lets you do all sorts of illegal stuff and easily sell it online in the future,” the tool’s creator wrote. “Everything blackhat related that you can think of can be done with WormGPT, allowing anyone access to malicious activity without ever leaving the comfort of their home.”

To guard against A.I.-assisted attacks, businesses should reassess their security controls.

Employees must be trained to recognize social engineering tactics, which may prove difficult: generative A.I. makes fraudulent messages far harder to spot, and it will inevitably drive further innovation in the malware sphere.

With that said, the best practices for avoiding social engineering attacks remain the same:

  • Users should never share credentials via email or phone. 
  • I.T. security training should highlight the threat of social engineering tactics and the potential role of generative A.I.
  • User access to key systems must be limited. Authorization should be granted on an as-needed basis, and user authorizations should be regularly audited (a minimal example of such an audit appears after this list).
  • Organizations should monitor the internet and dark web for potential breaches. 
  • Disaster recovery strategies should include at least one “golden copy” backup, which should be isolated from other systems (an air-gapped data backup). 
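
To make the access-audit recommendation concrete, here is a minimal Python sketch of a periodic review. It assumes two hypothetical CSV exports, system_accounts.csv (username, last_login) and approved_access.csv (username, approved_until), plus an illustrative 90-day dormancy threshold; the file names, columns, and threshold are placeholders, not any specific product’s format.

import csv
from datetime import datetime, timedelta

# Illustrative threshold: flag accounts unused for 90+ days (placeholder value).
DORMANT_AFTER = timedelta(days=90)
TODAY = datetime.now()

def load_csv(path):
    """Read a CSV export into a list of dictionaries."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

# Accounts that currently exist on the key system (hypothetical export).
accounts = load_csv("system_accounts.csv")  # columns: username, last_login (ISO date)

# Approved-access roster maintained by I.T. (hypothetical export).
approved = {
    row["username"]: datetime.fromisoformat(row["approved_until"])
    for row in load_csv("approved_access.csv")  # columns: username, approved_until (ISO date)
}

for acct in accounts:
    user = acct["username"]
    last_login = datetime.fromisoformat(acct["last_login"])

    if user not in approved:
        print(f"REVOKE  {user}: no current approval on record")
    elif approved[user] < TODAY:
        print(f"REVIEW  {user}: approval expired on {approved[user].date()}")
    elif TODAY - last_login > DORMANT_AFTER:
        print(f"REVIEW  {user}: dormant since {last_login.date()}")

Run on a schedule, a report like this keeps authorizations aligned with actual need and shrinks the number of accounts a social engineering attack can exploit.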

Datarecovery.com provides a range of services to help enterprises limit malware exposure. From disaster recovery assessment to penetration (pen) testing and dark web monitoring, our experts create sustainable strategies to protect key infrastructure.

As leaders in ransomware recovery, we can also assist enterprises when attacks occur. To learn more, call 1-800-237-4200 to speak with an expert or submit a request online.