In 2025, Russia significantly increased its use of artificial intelligence (AI) in disinformation operations, according to the latest report by the European External Action Service (EEAS). These campaigns, which targeted Ukraine and other European countries, highlight a growing trend in AI-powered information manipulation that poses serious challenges for governments, media, and the public alike.
Understanding AI-Driven Disinformation
AI-driven disinformation involves using generative technologies to create large volumes of misleading content, including:
AI-generated text – articles, social media posts, and fake news reports.
Synthetic audio – fabricated voice recordings mimicking real individuals.
Manipulated video (deepfakes) – realistic videos showing events that never occurred.
The EEAS report shows that 27% of the disinformation incidents analyzed in 2025 involved AI-generated content. Generative tools allow hostile actors to produce more material faster and with fewer resources, making it increasingly difficult to distinguish fact from fiction online.
Scope of Russian Operations
The disinformation campaigns targeted roughly 10,500 social media channels and websites, with Ukraine as the primary focus. The aim was to undermine trust in Ukraine’s leadership, weaken international support, and influence public perception in neighboring countries.
According to the EEAS findings:
29% of incidents were attributed to Russia.
6% to China.
The remaining 65% could not be attributed to a specific actor.
Many of these campaigns coincided with major political events such as elections, protests, and international crises, highlighting how vulnerable such critical moments are to manipulation. Countries affected included Germany, Poland, Romania, Moldova, and the Czech Republic.
Why AI Makes Disinformation More Dangerous
Traditional disinformation requires human effort to write, post, and distribute content. AI changes this by:
Increasing speed and scale – More content can be produced in less time.
Reducing cost – Automated generation lowers resource requirements.
Enhancing realism – Deepfake videos and synthetic audio are harder to detect.
These factors make AI a powerful tool for state and non-state actors seeking to manipulate public opinion, influence elections, or destabilize societies.
Real-World Examples
The EEAS report cited multiple campaigns with fabricated narratives, such as:
Allegations that Ukrainian drones targeted civilians in Russia’s Belgorod region.
Emotional, unverifiable stories designed to provoke fear or outrage.
While these claims were false, they spread rapidly online, demonstrating the efficiency and reach of AI-assisted disinformation networks.
How Governments and Citizens Can Respond
Addressing AI-powered disinformation requires both technological and educational approaches:
Governments can invest in AI detection systems and monitor social media platforms for suspicious content (a minimal sketch of such a screening step follows this list).
Media organizations should fact-check information rigorously and alert the public to false narratives.
Citizens can verify sources, cross-check news, and be cautious of content that evokes strong emotional reactions without credible evidence.
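For readers curious what an "AI detection system" looks like in practice, the Python sketch below shows one way a platform might screen posts with an off-the-shelf classifier. It is a minimal illustration, not a production design: it assumes the Hugging Face transformers library and the publicly available openai-community/roberta-base-openai-detector model (a GPT-2-era detector), and the sample posts and flagging threshold are invented for this example.

    # Illustrative only: a minimal screening step for flagging possibly
    # AI-generated text. Assumes the Hugging Face `transformers` library and
    # the "openai-community/roberta-base-openai-detector" model, a GPT-2-era
    # detector whose labels are "Real" and "Fake". The sample posts and the
    # 0.9 threshold are invented for this example.
    from transformers import pipeline

    # Load an off-the-shelf AI-text classifier (downloads the model on first use).
    detector = pipeline(
        "text-classification",
        model="openai-community/roberta-base-openai-detector",
    )

    posts = [
        "BREAKING: witnesses confirm drone strike on residential area ...",
        "The city council has published the agenda for next week's meeting.",
    ]

    for post in posts:
        result = detector(post)[0]  # e.g. {"label": "Fake", "score": 0.97}
        # Flag only high-confidence "Fake" predictions for human review;
        # a classifier score alone is never proof of manipulation.
        if result["label"] == "Fake" and result["score"] > 0.9:
            print(f"FLAG for review ({result['score']:.2f}): {post[:60]}")
        else:
            print(f"ok ({result['score']:.2f}): {post[:60]}")

In practice, such automated scores would feed into human fact-checking workflows rather than trigger removals on their own, since detectors for modern generators remain an active research area with significant error rates.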
Education and awareness are key to reducing the impact of AI-driven propaganda on society.
The Long-Term Implications
As AI technology advances, the scale and sophistication of disinformation campaigns are expected to increase. Experts warn that generative AI will continue to lower the barriers to content manipulation, posing global risks to democracy, public trust, and international security.
Building digital literacy and investing in AI detection tools will be essential strategies for governments, media outlets, and individuals to mitigate the harmful effects of these campaigns.
