AI Tool Fake Installers Spread Dangerous Malware

A New Wave of Cyber Threats: Ransomware and Malware from Fake AI Tools

Cybercriminals have discovered a new avenue for distributing malware by leveraging the increasing use of AI-powered tools. The attack methods are highly sophisticated, often hiding malicious payloads in seemingly legitimate software installers for popular platforms like ChatGPT and InVideo AI.

Cisco Talos, Cisco's threat intelligence and research group, recently released a report detailing the malware families associated with these fake AI installers. The report highlights three: the CyberLock and Lucky_Gh0$t ransomware families and the destructive Numero malware.

  1. CyberLock Ransomware: Developed using PowerShell, this ransomware focuses on encrypting specific files on the victim’s system. It targets the C:, D:, and E: drives and demands a ransom of $50,000 in Monero. The ransom note even claims that the funds will be allocated to support women and children in various regions of the world.

  2. Lucky_Gh0$t Ransomware: A variant of Yashma ransomware, Lucky_Gh0$t masquerades as a premium version of ChatGPT. When the malicious installer is run, it encrypts files under 1.2GB in size and deletes system backups to prevent recovery. The ransom note includes a unique ID and directs the victim to use a specific messaging app to negotiate payment.

  3. Numero Malware: This destructive malware is linked to fake installers for InVideo AI, a popular video creation tool. Numero targets the graphical user interface (GUI) of the victim’s Windows operating system, making the system unusable. It operates in an infinite loop, ensuring it is continuously executed and difficult to remove.

The attackers behind these fake installers are primarily targeting individuals and organizations within the business-to-business (B2B) sales and marketing sectors, which often rely on AI tools for customer engagement and content creation. By using SEO poisoning techniques, threat actors can artificially boost the ranking of their malicious websites, making it easier for unsuspecting users to download the malware.
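Because SEO poisoning works by steering users toward look-alike download pages, one simple mitigation is to confirm that a download URL actually belongs to the vendor's official domain before fetching anything. Below is a minimal Python sketch of that idea; the allow-list of domains is illustrative, not an authoritative or exhaustive list:

```python
from urllib.parse import urlparse

# Illustrative allow-list of vendor domains for the AI tools
# mentioned above (assumption: extend for your own tooling).
OFFICIAL_DOMAINS = {"openai.com", "chatgpt.com", "invideo.io"}

def from_official_domain(url: str) -> bool:
    """Return True only if the URL's host is an allow-listed vendor
    domain or a subdomain of one. Look-alike hosts such as
    'openai.com.evil.example' are rejected, because the suffix
    check requires a leading dot before the trusted domain."""
    host = (urlparse(url).hostname or "").lower()
    return any(
        host == domain or host.endswith("." + domain)
        for domain in OFFICIAL_DOMAINS
    )
```

An allow-list like this only defeats typosquatted or SEO-boosted look-alike domains; it does nothing against a compromised legitimate site, so it complements rather than replaces endpoint protection.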

What Undercode Says: Analyzing the Rise of Fake AI Installers

The rise of fake AI installers as a vector for malware highlights the growing sophistication of cybercriminals. These attackers are not just relying on simple phishing tactics; instead, they are integrating their malicious payloads within highly believable, real-world software tools that businesses and professionals use daily.

One key observation is the heavy use of social engineering. The claim by these malware distributors that ransom payments will support social causes (like helping women and children in conflict zones) is designed to make victims feel morally justified in paying, lowering their resistance to the demand.

Moreover, the legitimacy of the AI tools being impersonated adds another layer of complexity. OpenAI’s ChatGPT and InVideo AI are not only popular but considered essential by many professionals in fields ranging from marketing to software development. Attackers are well aware that businesses and individuals are often eager to access the latest and most powerful AI tools at a low cost, and it is this eagerness that makes the fake installer scams so effective.

The ransomware families themselves also reveal a disturbing trend: attackers are now incorporating advanced features like living-off-the-land binaries (LoLBins) and sophisticated file encryption methods. For example, the CyberLock ransomware can escalate privileges, encrypt files, and even delete forensic recovery data, making it incredibly difficult to recover files after an attack. Meanwhile, Lucky_Gh0$t goes a step further by deleting system backups, ensuring that victims are unable to restore their systems to a previous state without paying the ransom.

In addition, the Numero malware uses continuous-execution tactics: it runs in an infinite loop so that the malicious program restarts even if a user or security tool terminates it. Its ability to check for analysis tools and debuggers before executing also helps it evade detection by basic security products.

Fact Checker Results 🧐

Fake AI Tool Lures: Cisco Talos confirms the presence of fake AI tool installers, most notably for OpenAI ChatGPT and InVideo AI, which lead to ransomware infections.
Malware Types: Three types of malware have been identified – CyberLock, Lucky_Gh0$t, and Numero, all of which are highly destructive and use advanced evasion techniques.
Target Audience: These attacks primarily target individuals and organizations within the B2B sales and marketing sectors, as they often rely on AI tools for business purposes.

Prediction 📊

The rise of fake AI installers as a malware delivery method is likely to increase, as cybercriminals continue to exploit the growing use of AI tools across multiple industries. In the near future, we may see more sophisticated ransomware variants designed to target specific industries, such as healthcare, finance, and entertainment, where AI is becoming integral. As AI tools evolve and become even more deeply integrated into business workflows, the potential for new attack vectors is high. It’s essential for organizations to prioritize cybersecurity training, regularly update their software, and use advanced endpoint protection tools to defend against these emerging threats.
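Beyond training and endpoint tooling, one concrete habit that blunts fake-installer campaigns is verifying a download's cryptographic hash against the value the vendor publishes before running it. A minimal sketch follows; the published digest is something you would copy from the vendor's official download page, and the function names here are illustrative:

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a downloaded installer,
    reading in chunks so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_untampered(path: str, published_digest: str) -> bool:
    """Return True only if the file matches the digest published by
    the vendor; any mismatch means the installer was altered or came
    from somewhere else entirely."""
    return sha256_of_file(path) == published_digest.strip().lower()
```

A hash check only helps when the reference digest comes from the genuine vendor page, which is why it pairs naturally with downloading exclusively from official domains.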

References:

Reported By: thehackernews.com

