AI Frameworks Under Fire: Pickai Malware Compromises 700+ ComfyUI Servers

The Growing Threat Inside Open-Source AI Infrastructure

In a concerning development for the AI and cybersecurity community, over 700 servers running ComfyUI, an open-source framework for AI image generation, have been infiltrated by a stealthy backdoor malware named Pickai. The incident reflects a dangerous escalation in cyberattacks targeting the foundational tools that power AI systems, exposing deep vulnerabilities not only in the framework itself but also across its connected supply chains. With growing reliance on open-source platforms in machine learning, the discovery highlights how threat actors are evolving their tactics to exploit these ecosystems, compromising both developers and enterprise customers downstream.

Malware Campaign Targets AI Ecosystem

Security analysts from XLab’s Cyber Threat Insight and Analysis System first detected anomalies originating from IP address 185.189.149.151. These anomalies were traced back to a malicious campaign that planted ELF executables disguised as harmless JSON configuration files on compromised ComfyUI servers. The files, bearing innocuous names such as config.json and vim.json, turned out to be payloads for the Pickai backdoor, a lean but powerful C++-based malware designed for resilience, stealth, and remote command control.
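
This particular disguise is cheap to triage for: a genuine JSON file begins with printable text, while every ELF executable begins with the magic bytes `\x7fELF`. The sketch below, a minimal illustration rather than a published detection tool, walks a directory tree and flags `.json` files that are actually ELF binaries (the filenames come from the report; the scan root and function names are our own):

```python
import os

ELF_MAGIC = b"\x7fELF"  # first four bytes of every ELF executable

def is_disguised_elf(path: str) -> bool:
    """Return True if a file claims to be JSON by extension but begins
    with the ELF magic bytes, as the Pickai droppers reportedly did."""
    if not path.endswith(".json"):
        return False
    try:
        with open(path, "rb") as f:
            return f.read(4) == ELF_MAGIC
    except OSError:
        return False  # unreadable files are skipped, not flagged

def scan_for_disguised_elf(root: str):
    """Walk a directory tree and yield paths of .json files that are
    really ELF binaries. The scan root is chosen by the operator."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if is_disguised_elf(path):
                yield path
```

Pointing `scan_for_disguised_elf` at a ComfyUI installation directory would surface files like the reported config.json and vim.json droppers; a four-byte read per file keeps the sweep fast even on large trees.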

Once embedded, Pickai masquerades as legitimate system processes using clever names such as auditlogd and hwstats, then replicates itself across directories with randomized data to outmaneuver signature-based detection systems. It also exploits Linux’s systemd and init.d to remain persistent post-reboot. Pickai isn’t just sophisticated in how it hides — its command-and-control infrastructure is equally robust. It cycles through a series of hardcoded servers, constantly testing availability, and when defenders took down one of its domains, the attackers rapidly switched to a new domain: historyandresearch.com, registered for five years to ensure operational continuity.
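
The persistence side can be audited the same way. The sketch below checks common systemd and init.d locations for unit files or scripts matching Pickai's reported masquerade names; the name list comes from the article, while the directory list and matching logic are illustrative assumptions, not details from the report:

```python
import os

# Service names Pickai reportedly masquerades under (per the article)
SUSPECT_NAMES = {"auditlogd", "hwstats"}

# Typical persistence locations on systemd/SysV hosts (illustrative list)
PERSISTENCE_DIRS = [
    "/etc/systemd/system",
    "/usr/lib/systemd/system",
    "/etc/init.d",
]

def find_suspect_persistence(dirs=PERSISTENCE_DIRS, names=SUSPECT_NAMES):
    """Return paths of unit files or init scripts whose base name matches
    a known Pickai masquerade name (e.g. auditlogd.service)."""
    hits = []
    for d in dirs:
        if not os.path.isdir(d):
            continue  # location absent on this host, skip quietly
        for entry in os.listdir(d):
            base = entry.split(".", 1)[0]  # strip .service/.sh suffix
            if base in names:
                hits.append(os.path.join(d, entry))
    return hits
```

Because the malware reportedly mutates its on-disk copies, name-based checks like this complement, rather than replace, hash-based indicator sweeps.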

Communications between infected servers and the attackers use a custom binary protocol, allowing attackers to fingerprint host environments, detect if the host is in Docker, and execute reverse shells. Perhaps the most alarming twist came when samples of Pickai were discovered hosted on Rubick.ai, an AI-powered platform used by leading global retailers such as Amazon, Myntra, and The Luxury Closet. As Rubick.ai sits upstream in image and catalog generation, the malware’s reach could extend into the operational cores of dozens of enterprises.
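
Pickai's exact container check has not been published, but the two most common Linux heuristics are the presence of a /.dockerenv file and the string "docker" in PID 1's cgroup listing. The sketch below shows that detection logic from the defender's side, with the paths exposed as parameters purely so the check can be unit-tested; it is an assumption about technique, not a decompilation of the malware:

```python
import os

def looks_like_docker(dockerenv: str = "/.dockerenv",
                      cgroup: str = "/proc/1/cgroup") -> bool:
    """Heuristic container check: Docker creates /.dockerenv in the
    container's root filesystem, and PID 1's cgroup paths typically
    mention 'docker' inside a container."""
    if os.path.exists(dockerenv):
        return True
    try:
        with open(cgroup, "r", encoding="utf-8", errors="replace") as f:
            return "docker" in f.read()
    except OSError:
        return False  # /proc unavailable: assume bare metal
```

Knowing which signals an implant keys on matters operationally: a backdoor that behaves differently inside containers can evade sandboxes that do not mask these artifacts.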

Despite being notified, Rubick.ai reportedly failed to respond to researchers, increasing exposure risks across its extensive customer network. The campaign demonstrates a textbook supply chain attack, delivering malware via trusted platforms to unsuspecting organizations, proving just how vulnerable the AI development landscape has become.

Security professionals are now scrambling to identify and neutralize all Pickai implants by scanning for behavioral and file-based indicators of compromise. But given the malware’s persistence techniques and polymorphic behavior, complete eradication remains a daunting challenge.

What Undercode Says:

The Strategic Targeting of AI Frameworks Is No Accident

This attack marks a pivotal moment in cyberwarfare targeting AI. ComfyUI, a once-trusted open-source tool, has been weaponized into a distribution hub for malware. This isn’t just a breach — it’s a red flag for the future of AI infrastructure.

ComfyUI’s popularity and community-driven growth made it a prime target. As the backbone of image generation in many AI stacks, compromising it allows attackers to penetrate deeper layers of AI-powered systems. From development environments to production pipelines, Pickai effectively weaponizes the very tools developers rely on, creating a domino effect through entire ecosystems.

The design of Pickai reveals just how advanced and persistent modern threats have become. Its modular structure, intelligent service naming, and command obfuscation techniques suggest it was crafted by experienced threat actors with long-term infiltration goals. This isn’t opportunistic malware — this is strategic and calculated.

Equally troubling is its presence on Rubick.ai, a vendor with links to enterprise clients across global markets. This signals a shift from targeting individual servers to manipulating upstream platforms, pushing malicious code downstream to hundreds of clients. It’s the equivalent of poisoning a water source rather than individual bottles.

Despite the severity, Rubick.ai’s silence raises pressing questions about vendor responsibility. How many more companies are unknowingly distributing tainted content through infected automation pipelines? The supply chain vector has long been feared in cybersecurity circles, but its application within AI platforms introduces new challenges — especially as deep learning models increasingly automate high-value business tasks.

The infrastructure supporting AI is rapidly growing but often lacks the same security maturity as traditional IT systems. In the rush to innovate, development teams frequently overlook basic hardening steps, creating gaps that advanced malware like Pickai can exploit. Worse, as these frameworks become central to production environments, the damage from successful attacks increases exponentially.

From an operational standpoint, defenders need to rethink their strategies. Reactive patching is no longer sufficient. Active behavioral analysis, sandboxing of third-party tools, and continuous threat modeling across the AI stack must become standard. Furthermore, collaboration across the AI and cybersecurity community is essential. Open-source contributors need greater security awareness, while enterprise users must demand transparency and vetting from their software supply chains.

In the bigger picture, this incident is a stark reminder that AI — while promising revolutionary progress — can just as easily accelerate vulnerabilities. Tools like ComfyUI have democratized powerful capabilities, but without strong security oversight, they may become conduits for widespread exploitation. The era of secure AI development isn’t just a best practice — it’s an urgent necessity.

🔍 Fact Checker Results:

✅ Pickai malware is confirmed to be written in C++ and uses process masquerading for stealth.
✅ Rubick.ai was indeed hosting Pickai payloads during analysis.
✅ Rubick.ai did not publicly respond to security notifications at the time of reporting.

📊 Prediction:

Expect an uptick in malware campaigns targeting open-source AI tools like ComfyUI in the next 12 months.
More advanced backdoors will emerge, focusing on upstream integrations to maximize damage.
Vendors ignoring threat intelligence may face public backlash and potential legal ramifications.

References:

Reported By: cyberpress.org

