Introduction: A Growing AI Ethics Dilemma
In the fast-evolving world of artificial intelligence, Google’s Gemini AI model has found itself in the spotlight for controversial reasons. As generative tools become more powerful, concerns about ethical boundaries and legal risks are increasing. A recent report by TechCrunch has stirred the pot further, revealing that Google’s Gemini 2.0 Flash model can reportedly remove watermarks from copyrighted images—including those from premium stock providers like Getty Images. This revelation has ignited debate over AI’s responsibilities and its role in content integrity, with mounting scrutiny from developers, legal experts, and digital artists alike.
The Original Report
Reports have emerged that Google's Gemini 2.0 Flash model can be prompted to strip watermarks from copyrighted images, including samples from premium stock providers such as Getty Images.
The Gemini 2.0 Flash model, which is newly accessible through Google’s AI Studio, is still marked as “experimental” and not intended for production use. Although the model doesn’t always succeed—especially with complex or semi-transparent watermarks—its ability to alter image authenticity raises red flags.
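For context on how easily the model can be reached, below is a minimal sketch of a benign image-editing call through the Gemini API that backs AI Studio, using the google-genai Python SDK. The model ID, file names, and prompt are illustrative assumptions, not a recipe for watermark removal:

```python
from google import genai
from google.genai import types
from PIL import Image

# Placeholder credentials and file names, for illustration only.
client = genai.Client(api_key="YOUR_API_KEY")
source = Image.open("photo.png")

response = client.models.generate_content(
    model="gemini-2.0-flash-exp",  # assumed experimental model ID
    contents=["Add a soft sepia tone to this photo.", source],
    config=types.GenerateContentConfig(response_modalities=["TEXT", "IMAGE"]),
)

# Save any image parts the model returns.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("edited.png", "wb") as out:
            out.write(part.inline_data.data)
```

The specific edit is beside the point: any image-in, image-out prompt travels through the same call, which is why the absence of a server-side refusal matters.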
In contrast, AI models from OpenAI (like GPT-4o) and Anthropic (Claude 3.7 Sonnet) enforce strict limitations on watermark manipulation. These models will outright reject any request to remove watermarks, citing ethical and legal reasons. Claude, for example, labels the practice as “unethical and potentially illegal.”
Legally, removing a watermark without the rights holder's permission is generally a violation of US copyright law; the DMCA (17 U.S.C. § 1202) bars the intentional removal or alteration of copyright management information, with only narrow exceptions. Despite this, Google's model appears to lack the same guardrails. This divergence has sparked alarm among copyright holders and raised critical questions about the responsibilities of AI developers.
In parallel, Google’s broader AI ambitions continue. CEO Sundar Pichai recently highlighted new Gemini 2.0 robotics models, designed to transfer AI capabilities into the real world. These robots showcase enhanced generalization and embodied reasoning—suggesting future use cases that span from automated assistance to industrial applications.
Yet while Google showcases breakthroughs in robotics and multimodal reasoning, the watermark controversy underscores the importance of balancing innovation with ethical oversight.
What Undercode Say:
The Gemini AI watermark removal issue is not just a technical problem—it’s a fundamental question about AI’s boundaries, accountability, and the legal gray areas it can exploit. While it’s exciting to see Gemini models advancing in robotics and reasoning tasks, the unchecked ability to strip watermarks introduces enormous risks for digital media integrity.
At its core, this is a battle between innovation and regulation. Google appears to be moving fast—perhaps too fast—by opening access to an experimental model without embedding robust restrictions. While it may serve developer needs or training purposes, it also arms users with tools that can easily be misused for piracy or intellectual property theft.
Stock media platforms like Getty Images operate on the principle of content protection through watermarks. If AI can break that system down, the consequences ripple beyond just corporate profits—it erodes trust in the authenticity of digital media. This could have devastating impacts on photographers, designers, and artists whose livelihoods depend on watermark-based licensing.
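To make concrete what that protection looks like, here is a minimal Pillow sketch of the tiled, semi-transparent text mark that stock platforms typically overlay on preview images; the mark text and file names are placeholders:

```python
from PIL import Image, ImageDraw, ImageFont

def add_watermark(src_path: str, dst_path: str, text: str = "© Example Stock") -> None:
    """Tile a semi-transparent text mark across an image."""
    base = Image.open(src_path).convert("RGBA")
    overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    font = ImageFont.load_default()
    # Repeat the mark on a grid so simple cropping cannot evade it.
    step_x = max(base.width // 4, 1)
    step_y = max(base.height // 4, 1)
    for x in range(0, base.width, step_x):
        for y in range(0, base.height, step_y):
            draw.text((x, y), text, font=font, fill=(255, 255, 255, 96))
    Image.alpha_composite(base, overlay).convert("RGB").save(dst_path, "JPEG")

add_watermark("original.jpg", "preview.jpg")
```

A mark like this is cheap to apply but, as the Gemini reports show, no longer expensive to erase, and that imbalance is exactly what watermark-based licensing depends on avoiding.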
By contrast, OpenAI and Anthropic have chosen a much more cautious route. Their models block watermark removal and treat it as an ethical no-go. This proactive stance puts the onus on model creators to prevent misuse before it starts. It’s an approach that should become the industry standard, not the exception.
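Neither company publishes its exact safeguards, which combine model-level alignment with request screening. As a rough illustration of the screening half only, here is a toy pre-filter; the patterns and refusal text are assumptions, and real systems rely on trained safety classifiers rather than keyword lists:

```python
import re

# Hypothetical patterns; production systems use trained classifiers.
BLOCKED_PATTERNS = [
    r"\bremove\b.*\bwatermark",
    r"\bwatermark\b.*\b(remove|erase|strip|clean)",
    r"\b(erase|strip)\b.*\bwatermark",
]

def check_request(prompt: str) -> tuple[bool, str]:
    """Return (allowed, message) for an image-editing request."""
    lowered = prompt.lower()
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, lowered):
            return False, ("Refused: removing watermarks from images you do "
                           "not own is unethical and potentially illegal.")
    return True, "Accepted."

print(check_request("Please remove the watermark from this photo"))
# (False, 'Refused: ...')
```

The design choice worth noting is where the check runs: refusing before generation, as OpenAI and Anthropic do, blocks misuse by default rather than cleaning it up after the fact.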
The legal landscape is clear: watermark removal is generally illegal. But as AI capabilities outpace regulation, it becomes easy for companies to hide behind the “experimental” label as a form of plausible deniability. That tactic might buy time in development cycles, but it doesn’t earn public trust.
Furthermore, Google’s pivot to integrating Gemini with robotics feels like a deflection. Impressive though it is, showcasing robotic generalization can’t shield the company from accountability in its generative tools. Ethics in AI cannot be compartmentalized—if it’s wrong in one domain, it’s wrong across the board.
Unless Google publicly commits to embedding stronger restrictions—or faces legal consequences—it risks being seen as enabling bad actors. The AI community should demand transparency on watermark-related features, along with clear opt-outs or protections for content creators.
🔍 Fact Checker Results
✅ Watermark removal without consent is generally illegal under U.S. copyright law.
✅ OpenAI and Anthropic do restrict watermark removal in their public models.
❌ Gemini 2.0 Flash fails this check: multiple user reports indicate it does not currently block watermark removal.
📊 Prediction
If Google does not address watermark removal in Gemini soon, it could face mounting legal pressure and reputational damage. Expect lawsuits or stricter government regulations within the next 12–18 months, especially from rights holders like Getty Images. Competitors who adopt ethical guardrails may gain public trust faster, while Google could be seen as prioritizing innovation over responsibility.
References:
Reported By: timesofindia.indiatimes.com