China Allegedly Used ChatGPT for Propaganda and Espionage, Says OpenAI

Covert Influence: The Growing Threat of AI-Generated Propaganda

In a revelation that raises significant global cybersecurity and digital-ethics concerns, OpenAI has reported evidence of state-backed operations leveraging ChatGPT for propaganda and surveillance. The company’s latest threat report exposes a network of malicious campaigns that use generative AI to influence public opinion, manipulate narratives, and conduct covert espionage, most notably by Chinese actors.

The investigation led to the blocking of ten covert influence operations, four of which were likely orchestrated by China; others were linked to Russia, Iran, and North Korea. The campaigns were active across major platforms, including Reddit, TikTok, Facebook, and X (formerly Twitter), using AI-generated content to blend into online discourse while subtly shifting public sentiment.

One of the Chinese operations, dubbed “Sneer Review,” used ChatGPT to produce politically charged posts and comments in multiple languages: English, Chinese, and Urdu. Topics ranged from polarized takes on the U.S. Agency for International Development to criticism of anti-Chinese video games. The AI-generated narratives were sometimes deliberately contradictory, praising and criticizing the same event, in order to test audience reactions and sow confusion.

In an ironic twist, the perpetrators even used ChatGPT to write internal performance reviews evaluating their own manipulation efforts. These reviews not only documented the activity but also gave OpenAI valuable insight into the operators’ methods and organizational structure.

The Sneer Review network went beyond social media manipulation. It also included ChatGPT-crafted email outreach targeting journalists, analysts, and political figures as part of an intelligence-gathering effort. OpenAI notes that the patterns and behaviors described in those performance reviews closely matched the real-world activity its team uncovered.

The scale, sophistication, and audacity of these operations illustrate how AI is rapidly becoming a weaponized tool in geopolitical cyberconflicts. As generative AI tools become more accessible, implementing robust safeguards becomes all the more urgent.

What Undercode Says: The Cyber Threat Landscape Behind the Screens

Weaponizing AI for Disinformation

The use of generative AI by nation-states for psychological and information warfare represents a paradigm shift. Unlike traditional propaganda, which demanded extensive human labor, AI enables rapid generation of context-aware content at scale, flooding platforms with fabricated perspectives that mimic real users.

Sneer Review: A Case Study in Sophisticated Digital Deception

OpenAI’s identification of Sneer Review paints a chilling picture of how digital propaganda is evolving. The operation did not merely post content; it simulated full conversations, replied to its own posts, and framed the same events in contradictory ways. This deliberate ambiguity sows confusion and division and erodes trust in legitimate discourse.

Multilingual Targeting Increases Influence Radius

By using English, Chinese, and Urdu, these campaigns reached diverse audiences, tailoring messages to regional socio-political climates. This localization strategy marks a step-up in targeting accuracy and cultural manipulation, allowing propaganda to resonate more deeply with different demographics.

AI in Internal Espionage Logistics

What sets this apart from prior campaigns is the automation of internal logistics using AI. Performance reviews written by ChatGPT indicate not only the scale of operations but the bureaucratic approach to propaganda. This signals an emerging phenomenon where even internal evaluations and operational planning are AI-driven.

Implications for Democratic Institutions

These campaigns pose direct threats to democracies. By mimicking public opinion and influencing discourse, foreign actors can sway political climates, target vulnerable voter groups, or ignite unrest. Institutions must now account for AI-driven disinformation in electoral and civil stability risk assessments.

Platforms Must Strengthen AI Detection

Major platforms like TikTok, X, and Reddit must implement advanced AI-content detection algorithms to thwart such threats. Transparency reports, cross-platform data sharing, and global cooperation are essential to counter transnational influence operations.
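
To illustrate what a first-pass detection signal might look like, the sketch below implements perplexity scoring: machine-generated text often reads as statistically “smoother” to a language model than human prose. This is a minimal sketch, assuming the Hugging Face transformers library and the public GPT-2 model; the threshold value is an illustrative placeholder, not a tuned parameter, and production detectors combine many stronger signals.

```python
# Minimal sketch of a perplexity-based AI-text heuristic.
# Assumes: pip install torch transformers. The 35.0 threshold is an
# illustrative placeholder; low perplexity is only a weak signal.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the model's perplexity on `text` (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

def looks_machine_generated(text: str, threshold: float = 35.0) -> bool:
    """Flag text whose perplexity falls below an illustrative threshold."""
    return perplexity(text) < threshold

if __name__ == "__main__":
    sample = "The quick brown fox jumps over the lazy dog."
    score = perplexity(sample)
    print(f"perplexity={score:.1f}",
          "flagged" if looks_machine_generated(sample) else "not flagged")
```

Perplexity alone is easy to evade and can misflag fluent human writing, which is why behavioral signals such as posting cadence, account age, and reply networks, shared across platforms as described above, remain essential.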

Ethical Responsibilities of AI Developers

OpenAI’s transparency in exposing these operations is commendable. However, this also highlights the pressing need for AI companies to be proactive, embedding ethical checks, access limitations, and behavioral monitoring into public-facing models.

Global Response is Needed

As more countries explore AI-based operations, the lack of international regulation could spark a new digital arms race. The U.N., NATO, and other global bodies must prioritize discussions around AI ethics and establish protocols before such tools become the norm in geopolitical warfare.

✅ Fact Checker Results

Confirmed: ChatGPT was used by covert actors linked to China for social media manipulation.
Verified: Internal documents, including AI-generated performance reviews, were part of the operation.
Monitored: Targeted platforms include TikTok, X, Reddit, and Facebook.

🔮 Prediction

With generative AI tools becoming more powerful and accessible, AI-driven influence campaigns are likely to surge—not just from state actors, but from private interests and extremist groups. Social platforms, regulatory bodies, and AI developers must unite to develop real-time detection mechanisms, transparency protocols, and robust user education, or risk letting disinformation spiral beyond control.

References:

Reported By: 9to5mac.com