The Growing Threat of AI-Generated Deepfakes
The rapid rise of AI-powered image manipulation services has led to yet another massive data exposure. Following recent reports of unsecured databases leaking millions of images from dating apps, a new security lapse has been uncovered, this time involving an AI “nudify” service.
A well-known security researcher, famous for identifying exposed cloud storage buckets, has discovered an unprotected Amazon Web Services (AWS) bucket belonging to a nudify service. This service, like many others in the growing AI sector, enables users to transform regular images into nude versions using artificial intelligence.
The Breach: A South Korean AI Company Caught Exposed
The company at the center of the controversy is GenNomis by AI-NOMIS, a South Korean AI platform specializing in image generation and modification. Shockingly, the company or someone affiliated with it left 93,485 images and JSON files, totaling 47.8 GB, exposed in an unprotected, unencrypted cloud storage bucket.
GenNomis offers a variety of AI-driven tools, including:
– Text-to-image generation
– AI persona creation
– Image-to-video conversion
– Face-swapping
– Background removal
Despite its extensive range of capabilities, the company seemingly lacked basic security measures to protect its users’ data.
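For perspective on how low that bar is: blocking public access and enabling default encryption on an AWS S3 bucket takes two API calls. The sketch below, written with Python's boto3 library and a hypothetical bucket name, shows the kind of baseline controls whose absence made this exposure possible; it is an illustration, not a description of GenNomis' actual setup.

```python
# Minimal sketch: baseline S3 hardening with boto3.
# "example-media-bucket" is a hypothetical bucket name.
import boto3

s3 = boto3.client("s3")
bucket = "example-media-bucket"

# Block public access via ACLs and bucket policies.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Encrypt new objects at rest by default (SSE-S3, AES-256).
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
```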
The Disturbing Discovery
Upon analyzing the database, the researcher found numerous pornographic images, including what appeared to be AI-generated portrayals of underage individuals. While GenNomis’ terms of service prohibit child exploitation and other illegal activities, these disturbing images still appeared within the dataset.
Although it’s unclear whether such images were actively sold on the platform, their mere creation raises serious ethical, legal, and privacy concerns. AI-generated deepfakes have already been linked to cases of harassment, blackmail, and even tragic incidents where victims took their own lives due to online sextortion.
The Company’s Response—or Lack Thereof
When the researcher reported the exposed database, GenNomis took it down immediately but offered no response or acknowledgment of the issue. This raises concerns about the company’s commitment to data protection and ethical AI usage.
The Dangers of AI-Generated Deepfakes
The case of GenNomis is a stark reminder of the potential dangers posed by AI-generated content, particularly when it comes to deepfakes and privacy violations. Here’s why these threats matter:
1. Unauthorized Deepfakes
Publicly available images can be manipulated without consent, creating explicit deepfakes that can be used for blackmail, harassment, or reputational damage.
2. Hidden Metadata Risks
Many images contain metadata, including location and timestamps, which can be exploited for tracking and doxxing (a short inspection sketch follows this list).
3. Intellectual Property Violations
AI-generated content often relies on scraped images from the internet, raising concerns about artists and photographers having their work used without consent.
4. Bias in AI Models
AI systems trained on biased datasets may reinforce societal prejudices, leading to unfair or harmful outcomes.
5. Facial Recognition Exploitation
AI-powered facial recognition can link manipulated images to real individuals, with potentially serious real-world consequences.
6. Data Permanence
Once an image is online, it is nearly impossible to remove completely due to backups, caches, and data-sharing practices.
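To make the metadata risk in point 2 concrete, here is a minimal sketch using Python's Pillow library that lists the EXIF tags embedded in a photo before it is shared; GPSInfo and DateTime entries are precisely the fields that enable tracking and doxxing. The filename is a placeholder.

```python
# Minimal sketch: inspect the EXIF metadata a photo would leak.
# "photo.jpg" is a placeholder filename.
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("photo.jpg")

# getexif() returns the top-level EXIF directory; tag IDs map to
# human-readable names such as GPSInfo, DateTime, and Model.
for tag_id, value in img.getexif().items():
    print(f"{TAGS.get(tag_id, tag_id)}: {value}")
```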
What Undercode Says:
The GenNomis data leak is just one example of how poor security practices and unethical AI applications can put people at risk. This situation underscores the broader implications of AI-generated content, including privacy violations, ethical concerns, and legal ramifications.
1. The Business of AI and Lack of Regulation
Companies like GenNomis thrive in a largely unregulated industry, profiting from AI-generated content without implementing adequate safeguards. This raises a critical question: Should AI image manipulation services be more strictly controlled?
2. Deepfakes and the Weaponization of AI
Deepfake technology, while innovative, is increasingly used for nefarious purposes, including revenge porn, political misinformation, and fraud. This incident serves as a stark warning about the weaponization of AI-powered media manipulation.
3. Data Breaches Are Becoming Too Common
This breach is just another reminder of how frequently personal data is left exposed due to poor cybersecurity practices. Companies collecting and processing user images must be held accountable for protecting them.
4. Ethical AI—Is It Even Possible?
Despite claims of ethical guidelines, companies like GenNomis still allow problematic content to exist within their systems. This brings up an important discussion: Can AI truly be ethical, or will it always be misused?
5. Protecting Yourself in the Age of AI Manipulation
Users must take proactive measures to protect their digital presence. This includes:
– Being cautious about uploading personal photos online, and stripping their metadata first (see the sketch after this list).
– Checking privacy settings on social media.
– Using tools that detect deepfakes and AI manipulation.
– Supporting stronger AI and data protection regulations.
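As a concrete instance of the first item, the sketch below (again Pillow, with placeholder filenames) strips all metadata from a photo before it is uploaded, by copying only the pixel data into a fresh file.

```python
# Minimal sketch: strip metadata before sharing a photo.
# Filenames are placeholders.
from PIL import Image

img = Image.open("photo.jpg")

# Rebuild the image from raw pixels only; EXIF, GPS, and other
# metadata are not carried over into the new file.
clean = Image.new(img.mode, img.size)
clean.putdata(list(img.getdata()))
clean.save("photo_clean.jpg")
```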
The GenNomis scandal is not just about a security lapse—it highlights the growing risks associated with AI-driven media and the need for urgent action in regulating this technology.
Fact Checker Results:
✅ True: The breach involved over 93,000 images and JSON files exposed on an unsecured AWS bucket.
✅ True: The database contained AI-generated images, including explicit and disturbing content.
✅ True: The company took the database down but did not publicly respond to the findings.
References:
Reported By: https://www.malwarebytes.com/blog/news/2025/04/nudify-deepfakes-stored-unprotected-online