Meta Unveils Disruption of Covert Influence Operations from Iran, China, and Romania

Meta, the parent company of Facebook and Instagram, disclosed in its Adversarial Threat Report that it disrupted three covert influence operations originating from Iran, China, and Romania during the first quarter of 2025. These operations sought to sway public opinion across multiple platforms, including Meta’s own services, TikTok, X (formerly Twitter), and YouTube.

The social media giant said it removed the campaigns before they could amass a substantial authentic following, thwarting the operators’ attempts to manipulate online discussions. Let’s take a closer look at these influence operations and what Meta discovered.

Meta’s Adversarial Threat Report

In its quarterly Adversarial Threat Report, Meta detailed three major influence operations it disrupted during the early part of 2025. The first campaign targeted Romania and involved a network of 658 fake accounts, 14 Pages, and two Instagram accounts. These accounts, designed to look like local Romanian users, primarily posted content about sports, travel, and local news to blend into the native discourse. They also commented on posts by politicians and news outlets, creating the illusion of widespread engagement. Despite drawing little genuine interaction from real users, the operation spanned multiple platforms to simulate credibility.
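
Meta has not published its detection code, but the pattern described here, a cluster of accounts commenting on one another while drawing almost no outside engagement, lends itself to a simple heuristic. The sketch below is a minimal illustration of such an internal-engagement check; the Account structure, field names, and the toy data are assumptions invented for this example, not Meta’s actual method.

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    """Hypothetical record of one account and who comments on its posts."""
    handle: str
    commenters: list[str] = field(default_factory=list)

def internal_engagement_ratio(network: list[Account]) -> float:
    """Share of comments that originate inside the cluster itself.

    A ratio near 1.0 is the 'illusion of widespread engagement'
    pattern: the group is mostly talking to itself.
    """
    members = {a.handle for a in network}
    total = internal = 0
    for account in network:
        for who in account.commenters:
            total += 1
            internal += who in members
    return internal / total if total else 0.0

# Toy data: three fake personas commenting only on each other's posts.
fakes = [
    Account("ro_sports_fan", ["ro_travel_pics", "ro_local_news"]),
    Account("ro_travel_pics", ["ro_sports_fan", "ro_local_news"]),
    Account("ro_local_news", ["ro_sports_fan", "ro_travel_pics"]),
]
print(f"internal engagement: {internal_engagement_ratio(fakes):.2f}")  # 1.00
```

In a real investigation this ratio would be one weak signal among many, combined with posting-time correlation, shared infrastructure, and content similarity before any enforcement decision.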

The second campaign originated from Iran, aiming to influence Azeri-speaking communities in Azerbaijan and Turkey. It consisted of 17 Facebook accounts, 22 Pages, and 21 Instagram accounts. These accounts mainly focused on hot-button political issues, including the Israel-Palestine conflict and international boycotts. The operators behind this campaign cleverly masked their tactics by posing as female journalists and activists, using popular hashtags to gain attention and insert themselves into the online discourse. This operation was linked to a known threat group called Storm-2035, which had previously targeted U.S. voter groups.

Finally, Meta disclosed that it had dismantled a third influence network originating from China, which targeted audiences in Myanmar, Taiwan, and Japan. The Chinese-backed operation included the use of AI-generated profile photos to run fake accounts, which spread content critical of Taiwan’s government and the military junta in Myanmar, among other targeted narratives. The actors behind this network aimed to sway public opinion through manipulated content in local languages, including Burmese, Mandarin, and Japanese.

Meta’s findings underscore the global nature of influence operations and the sophisticated methods used by state-backed actors to manipulate social media platforms. By using fake accounts, targeted content, and artificial intelligence, these operations aim to create division and sway public opinion under the guise of authenticity.

What Undercode Says:

The recent findings in Meta’s quarterly Adversarial Threat Report highlight a disturbing trend of state-backed manipulation across social media platforms. What we see is a growing sophistication in how these influence operations are executed. The use of fake accounts, proxy IPs, and AI-generated profiles shows that threat actors are evolving with the technological landscape, making it increasingly difficult for platforms like Meta to identify and shut down these networks.

One significant takeaway here is the use of operational security (OpSec) tactics to hide the origin and coordination of these campaigns. The Romanian campaign, for example, took extensive steps to cover its tracks, using proxy infrastructure and local-language content to make the operation appear authentic. This highlights how influence operators are adopting professional-grade security measures, making their efforts harder to detect by conventional means.
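
The report does not describe how the proxy use was uncovered, but a common analytic check is to flag accounts that claim to be local while connecting from known proxy ranges or from a contradictory country. Here is a minimal sketch of that idea; the PROXY_RANGES list (RFC 5737 documentation blocks used as stand-ins) and the `geolocate` callable are assumptions, and a real pipeline would rely on a maintained threat-intelligence feed and a GeoIP service.

```python
import ipaddress

# Stand-in ranges; a real investigation would use a curated feed of
# commercial proxy/VPN provider networks.
PROXY_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def looks_proxied(login_ip: str, claimed_country: str, geolocate) -> bool:
    """Flag an account that claims to be local but logs in from proxy
    space, or from a country that contradicts its profile.

    `geolocate` is an assumed callable mapping an IP string to an ISO
    country code; any GeoIP library could stand in for it.
    """
    ip = ipaddress.ip_address(login_ip)
    via_proxy = any(ip in net for net in PROXY_RANGES)
    return via_proxy or geolocate(login_ip) != claimed_country

# Toy stub that pretends every IP geolocates to Romania; the proxy
# range alone is enough to trigger the flag here.
print(looks_proxied("203.0.113.7", "RO", lambda ip: "RO"))  # True
```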

Another notable aspect is the targeting of specific political events and issues. The Iranian operation, for example, focused on polarizing topics like the Israel-Palestine conflict and the Paris Olympics. The aim here was not only to manipulate local political views but to expand the reach of divisive narratives that could lead to wider international ramifications. The use of social media to inject these messages into public discourse can be incredibly powerful, especially when backed by coordinated botnets and fake identities.

The Chinese operation also raises questions about the increasing use of AI in manipulating public opinion. The AI-generated profile photos used by these fake accounts show how rapidly technology is being harnessed for malicious purposes. As AI continues to advance, it’s likely that we’ll see even more convincing and covert methods of influence in the future.
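
There is no public detail on how these particular photos were identified, but one common first-pass OSINT technique against fake-account networks is clustering near-duplicate profile photos with perceptual hashing, since personas built from the same generator or template often reuse visually similar faces. The sketch below assumes the Pillow and ImageHash libraries and local image files; it flags reused or templated avatars, which is a crude proxy, not a definitive AI-image detector.

```python
# Requires: pip install Pillow ImageHash
from PIL import Image
import imagehash

def avatar_clusters(photo_paths: list[str], max_distance: int = 6):
    """Group profile photos whose perceptual hashes nearly match.

    Faces from the same generator often share structure (e.g. fixed
    eye placement in StyleGAN output), so near-duplicate hashing is a
    useful first pass even though it cannot prove AI generation.
    """
    hashes = {p: imagehash.phash(Image.open(p)) for p in photo_paths}
    clusters: list[list[str]] = []
    for path, h in hashes.items():
        for cluster in clusters:
            # Subtracting two ImageHash values yields Hamming distance.
            if h - hashes[cluster[0]] <= max_distance:
                cluster.append(path)
                break
        else:
            clusters.append([path])
    # Only clusters with more than one member are suspicious.
    return [c for c in clusters if len(c) > 1]
```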

Moreover, the fact that Meta was able to identify and remove these operations before they could amass significant influence is a positive sign. However, it also serves as a reminder of the ongoing battle between tech companies and threat actors. As these operations become more sophisticated, Meta and other platforms must continue to adapt their detection systems to stay one step ahead.

Fact Checker Results:

Romanian Influence Operation: Meta identified and shut down a Romanian operation utilizing 658 fake accounts to manipulate public opinion. The accounts masqueraded as locals posting politically charged content but lacked genuine engagement.
Iranian Operation: A network from Iran targeted Azeri-speaking users, spreading pro-Palestinian and anti-U.S. rhetoric. This operation used fake personas, including female journalists, to manipulate online discussions.
Chinese Operation: Chinese-backed activity targeted Myanmar, Taiwan, and Japan, leveraging AI to generate fake profiles for spreading divisive content related to political regimes and international relations.

Prediction:

As we look to the future, it’s likely that state-sponsored influence operations will continue to evolve in both scope and sophistication. The use of AI, fake accounts, and coordinated campaigns across multiple platforms will likely increase. Social media companies, governments, and independent watchdogs will have to intensify their efforts to combat these threats. The battle against digital manipulation is ongoing, and as technology advances, so will the tactics of those seeking to sway public opinion covertly. Expect even more elaborate and hard-to-detect operations to emerge in the coming years.

References:

Reported By: thehackernews.com