WormGPT: Advanced AI Empowers Cybercriminals to Unleash Sophisticated Cyber Assaults

With the rise in popularity of generative artificial intelligence (AI), it’s not surprising that malicious actors have repurposed the technology for their own gain, opening doors for accelerated cybercrime.

A new tool called WormGPT, based on generative AI, has recently emerged in underground forums, offering adversaries a way to carry out sophisticated phishing and business email compromise (BEC) attacks.

This tool, touted as a blackhat alternative to GPT models, has been specifically designed for malicious activities. Security researcher Daniel Kelley explains that cybercriminals can utilize this technology to automate the creation of highly convincing fake emails that are personalized to the recipient. This customization increases the likelihood of success for their attacks.

The creator of WormGPT describes it as the biggest adversary to the well-known ChatGPT and claims that it enables users to engage in illegal activities.

In the wrong hands, tools like WormGPT can be powerful weapons, especially as OpenAI's ChatGPT and Google's Bard take steps to combat the misuse of large language models (LLMs) for fabricating convincing phishing emails and generating malicious code.

Check Point's recent report highlights that Bard's anti-abuse measures in the realm of cybersecurity are significantly weaker than ChatGPT's. As a result, it is easier to generate malicious content using Bard's capabilities.

Earlier this year, an Israeli cybersecurity firm exposed how cybercriminals are circumventing ChatGPT’s restrictions by exploiting its API. They trade stolen premium accounts and sell brute-force software to hack into ChatGPT accounts using extensive lists of email addresses and passwords.

The fact that WormGPT operates without ethical boundaries underscores the threat posed by generative AI. It allows even novice cybercriminals to launch swift and large-scale attacks without requiring significant technical knowledge.

Compounding the problem, threat actors are promoting "jailbreaks" for ChatGPT: specially engineered prompts and inputs designed to manipulate the tool into generating output that may involve revealing sensitive information, producing inappropriate content, or executing harmful code.

Generative AI can create emails with impeccable grammar, making them appear legitimate and reducing the chances of being flagged as suspicious. Kelley highlights that generative AI democratizes the execution of sophisticated BEC attacks, putting the technique within reach of attackers with limited skills and a far broader spectrum of cybercriminals.

In a separate incident, researchers from Mithril Security "surgically" modified an existing open-source AI model, GPT-J-6B, to spread disinformation. They uploaded the modified model, dubbed PoisonGPT, to Hugging Face, a public model repository, from which it can be integrated into other applications. The technique is referred to as LLM supply chain poisoning.

The success of PoisonGPT relies on the model being uploaded under a name that impersonates a reputable organization. In this case, the attackers used a typosquatted version of EleutherAI, the research group behind GPT-J.
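Supply chain poisoning of this kind works because applications typically pull models from a hub by repository name alone, trusting whoever controls that name. Below is a minimal defensive sketch (in Python, using the Hugging Face transformers library; the pinned commit hash is a hypothetical placeholder, not a real revision) of how pinning an exact revision can blunt both typosquatting and silent weight swaps.

```python
from transformers import AutoModelForCausalLM

# Loading a model by repository name alone trusts whoever controls that name
# on the Hub. A typosquatted organization (one character off from
# "EleutherAI") can serve a poisoned model to anyone who copies the name
# from a blog post or tutorial, so the exact spelling matters.
REPO_ID = "EleutherAI/gpt-j-6B"

# Pinning an exact commit hash via `revision` ensures that a later upload,
# or a lookalike repository, cannot silently swap the weights you audited.
# The value below is a hypothetical placeholder, not a real GPT-J revision.
model = AutoModelForCausalLM.from_pretrained(
    REPO_ID,
    revision="<audited-commit-hash>",  # hypothetical placeholder
)
```

Verifying checksums of downloaded weights, or mirroring audited models in an internal registry, follows the same principle: trust a specific, reviewed artifact rather than a mutable name.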
