AI-Fueled Disinformation: How Pro-Russia Campaigns Leverage Free Tools for a 'Content Explosion'

10:52 AM   |   02 July 2025

The AI-Fueled Surge in Pro-Russia Disinformation

A significant and concerning trend has emerged in information warfare: the weaponization of readily available, consumer-grade artificial intelligence tools by state-aligned actors. New research shows how a pro-Russia disinformation campaign is using these accessible technologies to fuel what experts describe as a “content explosion.” The surge in fabricated material is strategically aimed at inflaming societal tensions around critical global issues, including upcoming elections, the ongoing war in Ukraine, and immigration.

The campaign, identified by various names such as Operation Overload, Matryoshka, and linked by some researchers to Storm-1679, has been active since at least 2023. Its alignment with the Russian government has been noted by multiple reputable organizations, including Microsoft and the Institute for Strategic Dialogue. The core tactic involves disseminating false narratives, often by impersonating legitimate media outlets, with the clear objective of sowing discord and division within democratic nations. While its reach extends globally, including targeting audiences in the United States, Ukraine has remained a primary focus. Hundreds of videos manipulated with AI have been deployed in attempts to bolster pro-Russian perspectives and undermine support for Ukraine.

A key finding of the recent report is the dramatic escalation in the volume of content produced by this campaign. Between September 2024 and May 2025, the output surged significantly, with this fabricated content reportedly garnering millions of views worldwide. This marks a concerning acceleration in the pace and scale of disinformation efforts.

The “Content Explosion” Driven by Accessible AI

The researchers documented 230 unique pieces of content promoted by the campaign between July 2023 and June 2024. This included a mix of pictures, videos, QR codes, and fake websites. However, the subsequent eight months witnessed a staggering increase, with Operation Overload churning out a total of 587 unique pieces of content. The majority of this recent output, researchers concluded, was created with the direct assistance of AI tools.

The spike is attributed directly to the accessibility of consumer-grade AI tools, many of them available for free online. That ease of access has empowered the campaign’s signature tactic of “content amalgamation”: producing multiple pieces of content that push the same core narrative, a process made far more efficient and scalable with AI.

“This marks a shift toward more scalable, multilingual, and increasingly sophisticated propaganda tactics,” stated researchers from Reset Tech, a London-based nonprofit focused on tracking disinformation, and Check First, a Finnish software company, in their report. They emphasized that the substantial increase in production over the past eight months signals a clear move towards faster, more scalable content creation methods, fundamentally altering the operational dynamics of the campaign.

The diversity of content types employed by the campaign also surprised researchers. Aleksandra Atanasova, lead open-source intelligence researcher at Reset Tech, said the campaign has expanded its “palette,” approaching stories from multiple angles and layering different types of content in sequence. Crucially, Atanasova noted that the campaign did not appear to rely on bespoke or custom AI solutions, but rather on widely available, AI-powered voice and image generators accessible to the general public.

Specific AI Tools and Tactics

While pinpointing every tool used by the campaign operatives proved challenging, researchers were able to identify one specific tool with high confidence: Flux AI.

Flux AI is a text-to-image generator developed by Black Forest Labs. By employing the SightEngine image analysis tool, researchers found a 99 percent likelihood that numerous fake images disseminated by the Overload campaign were generated using Flux AI. These images included fabricated scenes, such as those falsely depicting Muslim migrants rioting and setting fires in European cities like Berlin and Paris.
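
To illustrate the kind of check involved, here is a minimal sketch of how a researcher might query an image-analysis service for an AI-generation score. It assumes SightEngine’s publicly documented REST endpoint; the model name, credentials, and response fields shown are assumptions and should be verified against the vendor’s current documentation.

```python
# Hedged sketch: querying an image-analysis API for an AI-generation score.
# The endpoint, the "genai" model name, and the response fields are assumptions
# based on SightEngine's public documentation and may differ in practice.
import requests

def ai_generation_score(image_path: str, api_user: str, api_secret: str) -> float:
    """Return the service's estimated probability that an image is AI-generated."""
    with open(image_path, "rb") as f:
        response = requests.post(
            "https://api.sightengine.com/1.0/check.json",
            files={"media": f},
            data={"models": "genai", "api_user": api_user, "api_secret": api_secret},
            timeout=30,
        )
    response.raise_for_status()
    result = response.json()
    # Assumed response shape: {"type": {"ai_generated": 0.99}, ...}
    return result.get("type", {}).get("ai_generated", 0.0)

if __name__ == "__main__":
    score = ai_generation_score("suspect_image.jpg", "YOUR_API_USER", "YOUR_API_SECRET")
    print(f"Estimated probability of AI generation: {score:.2f}")
```

A score close to 1.0, like the 99 percent likelihood the researchers reported, indicates the classifier is highly confident the image was machine-generated; such scores are probabilistic evidence rather than proof.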

To validate their findings and understand the potential for misuse, the researchers were able to generate images that closely mirrored the aesthetic of the campaign’s output using prompts that included discriminatory language, such as “angry Muslim men.” This experiment starkly illustrated how AI text-to-image models can be abused to promote racism and fuel anti-Muslim stereotypes, raising significant ethical concerns regarding the behavior and outputs of different AI generation models.

Black Forest Labs responded to these concerns, stating that they build in multiple layers of safeguards to help prevent unlawful misuse, including provenance metadata designed to help platforms identify AI-generated content. They also emphasized their support for partners in implementing additional moderation and provenance tools. However, they acknowledged that preventing misuse requires layered mitigation and collaboration among developers, social media platforms, and authorities. Atanasova noted that the images reviewed by her team did not contain any such metadata.
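
As a rough illustration of what such a check involves, below is a minimal sketch of a first-pass inspection for provenance hints in an image’s standard metadata. It only looks at EXIF tags and format-specific metadata exposed by the Pillow library; verifying a full cryptographic provenance manifest such as C2PA requires dedicated tooling, and the file name here is a placeholder.

```python
# Minimal first-pass check for provenance hints in an image's basic metadata.
# This inspects EXIF tags and format-specific info exposed by Pillow; it does
# not verify cryptographic provenance manifests such as C2PA.
from PIL import Image, ExifTags

def provenance_hints(path: str) -> dict:
    hints = {}
    with Image.open(path) as img:
        # Format-specific metadata (e.g. PNG text chunks) lands in img.info.
        for key, value in img.info.items():
            text = str(value).lower()
            if any(marker in text for marker in ("c2pa", "provenance", "generated")):
                hints[key] = value
        # EXIF tags such as "Software" sometimes name the generating tool.
        exif = img.getexif()
        for tag_id, value in exif.items():
            tag = ExifTags.TAGS.get(tag_id, str(tag_id))
            if tag in ("Software", "ImageDescription", "Artist"):
                hints[tag] = value
    return hints

if __name__ == "__main__":
    found = provenance_hints("suspect_image.png")
    print(found or "No provenance hints found in basic metadata.")
```

An empty result, as with the images Atanasova’s team reviewed, does not prove an image is authentic; it only means no provenance signal survived whatever processing or re-encoding the file went through.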

Beyond static images, Operation Overload has also made heavy use of AI voice-cloning technology to manipulate videos, making it appear as though prominent figures said things they never actually said. The campaign’s video output jumped from 150 between June 2023 and July 2024 to 367 between September 2024 and May 2025, and the majority of videos created in the latter period were found to employ AI for deceptive purposes.

A particularly striking example occurred in February, when the campaign published a video on X featuring Isabelle Bourdon, a senior lecturer and researcher at France’s University of Montpellier. The manipulated video falsely depicted her encouraging German citizens to engage in mass riots and vote for the far-right Alternative for Germany (AfD) party in federal elections. The original footage, taken from the university’s official YouTube channel, showed Bourdon discussing a social science prize she had won. AI-voice cloning was used to completely alter her spoken narrative to fit the disinformation campaign’s agenda.

[Photo illustration: a figure in a hooded jacket reaches toward a screen of distorted faces, symbolizing disinformation. Credit: WIRED Staff; Getty Images]

Distribution Channels and Platform Responses

Once the fake and AI-generated content is created, Operation Overload distributes it through more than 600 Telegram channels, as well as bot accounts on social media platforms such as X (formerly Twitter) and Bluesky. In recent weeks, the campaign also expanded to TikTok for the first time: researchers first observed this in May, when just 13 accounts racked up 3 million views before the platform took action to demote them.

TikTok responded to the findings, with spokesperson Anna Sopel stating, “We are highly vigilant against actors who try to manipulate our platform and have already removed the accounts in this report.” She added that TikTok actively detects, disrupts, and works to stay ahead of covert influence operations, reporting progress transparently each month.

Responses from other platforms varied. While Bluesky had suspended 65 percent of the fake accounts identified, X was found to have taken minimal action despite repeated reports about the operation and growing evidence of coordination among the accounts. Neither X nor Bluesky commented on the report’s findings.

The Counterintuitive Tactic: Emailing Fact-Checkers

Perhaps the most unusual tactic employed by Operation Overload is its practice of proactively emailing examples of its fake content to hundreds of media and fact-checking organizations around the world. These emails typically include links to the AI-generated content hosted on various platforms, along with requests that recipients investigate whether the material is authentic.

While it might seem counterintuitive for a disinformation campaign to alert those dedicated to combating false information about their activities, researchers suggest a strategic motive. For the pro-Russia operatives, getting their content featured by a legitimate news outlet—even if it is explicitly labeled as “FAKE” and debunked—appears to be a desired outcome. This could serve multiple purposes: it might expose the content to a wider audience than their own channels could reach, potentially confuse or overwhelm fact-checking efforts, or even be used later to claim that the “mainstream media” is talking about their narratives, regardless of the context.

According to the researchers, an estimated 170,000 such emails have been sent to more than 240 recipients since September 2024. While the emails contained links to AI-generated content, the email text itself did not appear to be AI-generated.

A Growing Threat: AI and the Future of Disinformation

Pro-Russia disinformation groups have been experimenting with using AI tools to enhance their output for some time. Last year, a group dubbed CopyCop, also likely linked to the Russian government, was found to be using large language models (LLMs) to create fake websites designed to mimic legitimate media outlets. Although these fake sites often fail to attract significant direct traffic, the accompanying social media promotion can draw attention, and in some instances, the fabricated information has managed to appear high up in Google search results.

A recent report from the American Sunlight Project estimated that Russian disinformation networks are producing a staggering minimum of 3 million AI-generated articles annually. This volume of low-quality, fabricated content poses a significant risk of “poisoning” the training data and outputs of AI-powered chatbots like OpenAI’s ChatGPT and Google’s Gemini, potentially leading these models to inadvertently reproduce or legitimize false narratives.

Researchers across various institutions have repeatedly documented how disinformation operatives are rapidly adopting and integrating AI tools into their workflows. As the capabilities of AI continue to advance, and as it becomes increasingly challenging for the average person to distinguish between authentic and AI-generated content, experts predict that the surge in AI-fueled disinformation campaigns will only continue to grow.

The findings regarding Operation Overload underscore a critical turning point. The barrier to entry for creating sophisticated, multimodal disinformation is being lowered by accessible AI tools. This allows campaigns to operate with greater speed, scale, and diversity than previously possible, making them harder to track and counter.

“They already have the recipe that works,” Atanasova commented, reflecting on the campaign’s apparent mastery of these new tactics. “They know what they're doing.” This suggests that the methods observed in Operation Overload are not experimental but represent a refined and effective approach that is likely to be replicated by other malicious actors.

The Implications for Society and Information Integrity

The implications of AI-fueled disinformation campaigns are profound. They threaten to further erode public trust in media and institutions, exacerbate political polarization, and potentially influence the outcomes of democratic processes. The ability to generate realistic deepfakes of political figures or create convincing fake news articles at scale poses a direct challenge to the integrity of the information ecosystem.

The focus on sensitive topics like immigration and elections is particularly insidious, as these are areas where emotions run high and false information can have immediate and tangible real-world consequences, including inciting social unrest or influencing voter behavior. The use of discriminatory prompts in image generation, as demonstrated with Flux AI, highlights how AI can be directly weaponized to amplify hate speech and prejudice.

Combating this threat requires a multi-pronged approach. AI developers face increasing pressure to implement robust safeguards, including provenance metadata and content moderation tools, although the effectiveness and widespread adoption of such measures remain challenges. Social media platforms must enhance their detection and enforcement mechanisms to identify and remove AI-generated disinformation, a task complicated by the sheer volume and evolving sophistication of the content. The varying responses of platforms like TikTok, Bluesky, and X highlight the uneven landscape of content moderation.

Fact-checking organizations and journalists play a crucial role, but they are increasingly overwhelmed by the scale of the problem, as evidenced by Operation Overload’s tactic of flooding them with content. New tools and collaborative efforts are needed to keep pace with the rapid generation of fake material.

Furthermore, public media literacy is more critical than ever. Educating individuals on how to identify potential deepfakes, verify information sources, and understand the tactics used by disinformation campaigns is essential to building resilience against these attacks. The ease with which free AI tools can be used means that the capacity to generate sophisticated fakes is no longer limited to state-level actors with significant resources; it is becoming democratized, accessible to a wider range of malicious players.

Looking Ahead: The Arms Race in the Information Space

The “content explosion” driven by AI is not merely an increase in quantity; it represents a qualitative shift in the disinformation landscape. The ability to quickly generate diverse, multimodal content tailored to specific narratives and audiences makes these campaigns more adaptable and potentially more persuasive. The narrative tone of the fabricated content, combined with realistic visuals and audio, can be highly effective in bypassing critical thinking and appealing directly to emotions and biases.

The research on Operation Overload serves as a critical warning. It demonstrates that the theoretical risks of AI being used for mass disinformation are already being realized in practice by sophisticated, state-aligned actors. The accessibility of the tools means that this is a threat that will continue to grow and evolve rapidly.

The ongoing effort to counter this threat is an arms race in the information space. As AI detection methods improve, disinformation operatives will likely find new ways to evade them, potentially by using more advanced or customized AI models, or by further refining their distribution and amplification tactics. The tactic of emailing fact-checkers, for instance, shows a willingness to innovate and exploit the very systems designed to combat them.

Ultimately, addressing the challenge posed by AI-fueled disinformation requires a coordinated global response involving governments, technology companies, civil society organizations, and the public. Without effective countermeasures and increased awareness, the “content explosion” risks overwhelming our ability to discern truth from falsehood, with potentially devastating consequences for democratic societies and global stability.

The findings from the Reset Tech and Check First report, corroborated by research from organizations like Microsoft, the Institute for Strategic Dialogue, Recorded Future, the American Sunlight Project, RUSI, and the Reuters Institute, paint a clear picture: AI has become a powerful new weapon in the arsenal of disinformation campaigns. Understanding how these tools are being used, the tactics employed, and the scale of the output is the first crucial step in developing effective strategies to defend the integrity of the information environment against this rapidly escalating threat.