2025 saw coordinated attempts to undermine democratic processes across Central and Eastern Europe. Will Hungary and Bulgaria be the next battlefields?
One year ago, the U.S. government’s move to massively cut back on foreign aid programs hit Europe hard, leaving civil society in a state of shock. The Trump administration systematically started cutting off funds to all manner of NGOs across the region. For many small independent news and research organizations, these funds were essential to do the work that they had been urged to do by organizations like USAID and the National Endowment for Democracy (NED). Much of this funded work was to mitigate hybrid threats to democracies across the globe, including coordinated campaigns by state and non-state actors to manipulate and interfere in political processes. In the words of the Reuters Institute for the Study of Journalism, European civil society and independent journalism found themselves “shattered by a perfect storm.”
The immediate financial toll the USAID cuts took on just the EU countries in Central and Eastern Europe could be as high as $30–$35 million annually. The United States has been a key source of funding for independent journalism in this region since the 1980s. However, the borders of the information space are neither static nor solid. Information in the region spills over into non-EU countries like Ukraine, the country that saw the largest absolute cut in USAID money. Meanwhile, EU candidate Moldova has faced a turbulent year of political polarization, disinformation campaigns, and external interference that have tested the resilience of its democratic institutions and civil society.
“In countries with a somewhat more mature local philanthropy and grant-making ecosystem, such as the Czech Republic and Poland, NGOs seem to have been better able to cushion the shock by starting to more proactively get funds from domestic donors, EU programs, and Nordic/EEA mechanisms,” said Marius Dragomir, director of the Media and Journalism Research Center (MJRC) think tank. “In contexts where independent civil society was already under political pressure and local philanthropy is weak, such as Hungary or Bulgaria, the freeze has been much more disruptive. Here, many NGOs and independent media relied on a small pool of foreign donors, among which USAID was key.”
With critical elections in Hungary scheduled for 12 April, and the latest in a string of snap elections in Bulgaria a week later, there is an opportunity to examine what policy makers, journalists, and researchers have learned from a year when disinformation and influence campaigns coordinated by foreign actors took on new shapes and sizes and increasingly made use of artificial intelligence.
Closely Watched Elections
Central and Eastern Europe saw several critical elections in 2025, each rife with attempted foreign information manipulation and interference (FIMI), analysts say, though with varying responses.
High on the watchlist were the federal elections in Germany, parliamentary elections in Moldova and the Czech Republic, and presidential elections in Romania and Poland. These contests were followed closely across Europe. In some cases, they underscored a deeply polarized political environment; in others they revealed how disinformation around the war in Ukraine can influence politics in nearby countries.
FIMI on the Rise
Hybrid threats have become an instrument of geopolitical warfare, using information manipulation as a tool to bend political outcomes and sentiments in society. The consensus of numerous studies is that pro-Kremlin actors coordinate these attacks in Europe. Further dissemination by Russian diplomatic circles and friendly media, influencers, and social media channels only amplifies these messages.
“Following the trends already observed in 2024, the use of artificial intelligence (AI) in FIMI is growing, especially for content production. … Video and audio are frequently used, in particular in [an] electoral context, to attack specific individuals. AI is also used for mass production and automated translation, with the objective of expanding the reach of the FIMI actions,” an EU spokesperson, whom the EU’s press service didn’t wish to be further identified, said.

Experience Gained From 2025’s Elections
The latest published report by the European External Action Service (EEAS) identified over 500 FIMI incidents in 2024, carried out through 38,000 different accounts across 25 different websites and social media channels. With artificial intelligence producing ever higher-quality content, these numbers can be expected to be significantly higher in the next report.
The report names Russia and China as the main sources of disinformation aimed at Europe.
“Recent Russian FIMI narratives have emphasized Russia’s supposed invincibility along with Western weakness and lack of unity. Responsibility for the failure of peace negotiations was consistently denied and redirected toward Ukraine. Europe has replaced the United States as the primary target of attacks,” the EU spokesperson said.
Online social media platforms play a significant role, functioning as a sandbox where Kremlin-linked actors try out different methods of information manipulation. With extensive political campaigning online, these efforts can help win or lose elections. In the EU, the Digital Services Act (DSA) has been one of the tools employed to counter foreign-based disinformation and misinformation on social media platforms, imposing stricter requirements to mitigate information manipulation on very large online platforms such as Facebook, TikTok, and X.
Before we look at the different ways in which information manipulation took place across Europe in 2025, it is important to note that while these cases might seem like isolated incidents, analysis reveals many layers of interconnection. As Maksym Beznosiuk and other security analysts have concluded, Russian hybrid operations in Europe are a coordinated effort to destabilize the continent, its democratic institutions, and partnerships.
A Mess in Moldova
The Romanian presidential election was mired in controversy from the start. In the first round in November 2024, social media platforms helped boost far-right candidate Calin Georgescu’s campaign to an unexpected victory, with his support surging from about 5% to 23% in just three weeks, aided by more than 25,000 accounts, many on TikTok. The Constitutional Court later annulled the election, citing credible evidence of foreign interference and campaign irregularities. When the vote was rescheduled for May 2025, Romanian authorities barred Georgescu from running in the rerun.
Then came the parliamentary elections in Moldova, held on 28 September 2025. Leading up to the elections, the central disinformation narratives were mainly designed to incite fear of the country losing its sovereignty and being “dragged into war” by the pro-European government. In contrast, consumers of such content were assured that pro-Russian parties and oligarchs like Ilan Shor represented the true Moldovan identity. Domestic pro-Russian manipulation amplified disinformation directly from Russia, and the Central Electoral Commission ended up disqualifying two political parties linked to Shor on allegations of illegal financing and voter bribery. (In the past, the EU had sanctioned Shor’s Victory Party and two other entities linked to him over vote-buying and spreading misinformation during the 2024 referendum on EU membership; and Canada had imposed sanctions on 16 persons and two entities associated with Shor, citing Russia’s “malign interference activities.”) The elections in Moldova were a showcase of coordinated inauthentic behavior: researchers concluded that AI-driven bot networks posted content, engaged with each other’s content, and flooded the information space.
The Moldovan authorities’ efforts to reverse the flow had some success. During the election period, the country established direct communication with the operators of digital platforms to ensure real-time responses. Acting on official requests from the General Police Inspectorate to block 443 accounts, with a combined 1.2 million followers and 4.5 million views, TikTok took down hundreds of them. The National Police launched their own awareness campaign, “Don’t Play With Your Vote.”
In comparison, the Czech parliamentary elections on 3-4 October were calm, at least in terms of FIMI, according to the FIMI-ISAC report on the poll. Despite the activity of numerous Kremlin-linked channels, state institutions and the public appeared better prepared for interference from abroad, anticipating disinformation narratives alleging electoral fraud (especially in the newly introduced postal vote) or censorship. FIMI researchers did uncover vulnerabilities, but these were largely examples of public distrust in state institutions and of platforms’ inaction in limiting bot activity on TikTok and X. The FIMI-ISAC report also points out that the information space in the Czech Republic is closely connected to Slovakia’s, mainly because of linguistic proximity, resulting in a spillover of Slovak-based Telegram channels spreading pro-Kremlin narratives. Collaboration between the two countries on fighting the spread of these narratives was limited, and channels such as the FSB-linked NewsFront SK operated without restrictions.
Lastly, Poland’s presidential election was consistent with the election years 2017 and 2021, avoiding the large-scale AI-powered disinformation campaigns seen elsewhere in the region. Importantly, the general consensus in Poland remains anti-Russian. Attempts by the so-called Doppelganger operation to spark public criticism of military spending did not resonate, as polling data showed. At the end of 2025, however, AI-generated videos of young women delivering far-right messages, including calls for Poland to leave the EU, gained many views on TikTok, in what Deputy Minister for Digital Affairs Dariusz Standerski called a test run before the parliamentary elections in 2027.

Eyes on Hungary and Bulgaria
In the EU and the CEE region, Hungary’s civil society has arguably been hit hardest by the USAID cuts, as the country’s NGOs and independent media were already under strong pressure, a situation reflected in the World Justice Project’s rule-of-law index, among others, which ranks Hungary last in the EU. The government has also consistently opposed sanctions against Russia and stalled EU-level decisions on Ukraine.
Hungary occupies a category of its own in terms of threats to a healthy civil society. Prime Minister Viktor Orban and the governing Fidesz party have captured the media landscape in the country, including the public broadcaster, and have co-opted huge swaths of the private media to closely align with their policies. The authorities have implemented an extensive intimidation campaign against NGOs and independent media, especially those funded from abroad. There is a growing sense that with a weakened civil society and few independent media voices, resilience toward hybrid threats has steadily declined under Orban’s government. “I’ve noticed that the increased visibility of ‘disinformation threats’ can even fuel polarization or fatigue, making some people more cynical rather than more resilient,” MJRC’s Dragomir said. “In spite of that, however, economic problems and cost‑of‑living pressures have made parts of the electorate more receptive to alternative political options, which raises significant concerns for Fidesz ahead of the 2026 elections.”
Hungary’s anti-Western, pro-Russian narratives have long been part of political communication in the country. Experts have likened the political and personal smear campaign against Peter Magyar, the challenger currently leading Orban in the polls, to Russian tactics. Most recently, Magyar accused Fidesz of orchestrating the release of an alleged sex tape involving him and his ex-girlfriend. At the time of writing, the video had not been published; if it surfaces, it would merely add to the barrage of aggressive campaigning and AI-generated videos from Fidesz ahead of the upcoming elections.
Meanwhile, Bulgarian voters are gearing up for the eighth snap election since 2021. The country finds itself in a vulnerable position, just after joining the eurozone this year and still battling entrenched corruption, with parliamentary elections coming up in April followed by presidential balloting later this year. A pre-bunking report by the Bulgarian-Romanian Digital Media Observatory identified narratives likely to appear: that foreign intelligence services organized the protests that led to the snap elections; that electronic voting could be manipulated; and even that the Constitutional Court would annul the results, just as in Romania. More anti-EU sentiment, including attacks on the euro, as well as pro-Russian voices in general, is also likely to surface.
Flagging Suspicious Content Is Just the Beginning
As these elections draw near, the EEAS is monitoring Russia and China as FIMI actors and facilitates the EU’s Rapid Alert System (RAS) and the G7 Rapid Response Mechanism (RRM), coordinating with EU member states and institutions through unified terminology and frameworks. As monitoring efforts grow, coordination and weeding out duplicate reporting remain a challenge. The primary responsibility for addressing FIMI remains with the member states and internet providers.
In the ideal case, real-time monitoring starts with verification of facts and narratives on social media platforms and fringe media, coordinated with partner organizations that also work on election monitoring. Analysis and attempts at attribution follow. The last step is public communication and advocacy: informing the broader public and calling for timely action (for instance, requesting that platforms such as X, Meta, and TikTok remove suspicious content). Here we reach the limits of domestic flagging and analysis: when platforms take no action, the process stalls at that final stage.
Even though subject to EU bans, Russian websites like RT or Sputnik are still available in the EU through mirror websites. The issue is a technical one, as blocking can be done on multiple levels. “The biggest problem is that the European Commission does not maintain a list of the domain names,” said Domician Zahorjan, co-founder and analyst at the NEST Institute in Bratislava. “Neither the member states nor the European Commission have the resources to actively monitor whether RT or Sputnik is trying to bypass [the ban] by creating a mirror domain.”
Working with social media platforms on flagging malicious actors doesn’t always work. Even when willing to act, platforms often cannot keep up with the flood of suspicious content in the runup to an election. Artificial content is sometimes produced in such volumes that it acts like a “disinformation bomb,” said Attila Biro, an investigative journalist and founder of Context Romania.
“Online, you need to have a clear distinction … between real people and these operations that are artificial. And unfortunately, the biggest challenge we have ahead is that we cannot make a distinction. We do not have clear accountability from the platforms to distinguish artificial traffic,” said Madalina Voinea, an analyst from Expert Forum in Romania.
Besides flagging content on social media platforms, countries can also block entire websites used to spread disinformation. For example, in 2023, and again ahead of the referendum on EU accession in 2024, the Moldovan Intelligence and Security Service banned (mostly) Russian websites, citing “national security risks.”
The Attribution Problem: Who Bears Ultimate Responsibility?
The immediate result of FIMI is a flooded information environment. The long-term impact, however, goes much further. Fragmented audiences, declining trust in the media, and the rise of alternative information channels without proper regulation or adherence to journalistic standards are just part of the picture.
Attribution remains the soft spot of FIMI responses. “Today’s operations are structured to be anonymous and hard to attribute to a specific actor,” said Zahorjan. “While the manipulation – coordinated messages, bot accounts, and cross-platform pollution – can be proven, conclusive attribution to a specific state actor remains challenging.” While some channels that spread FIMI and disinformation make no secret of their Kremlin sympathies, many others try to obscure the identities of the people behind them. Attribution has become especially difficult as operations grow in both complexity and magnitude, aided by the spread of generative AI. In the past, researchers worked by searching for identical content across channels; nowadays they might easily find 1,000 different posts, all spreading the same narrative but using different words.
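The gap between the old exact-match approach and today’s paraphrased campaigns can be illustrated with a toy sketch. The posts and threshold below are invented for illustration and do not reflect any researcher’s actual methodology; the point is simply that comparing word overlap, rather than literal text, can surface a reworded narrative that a duplicate search would miss.

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercased word set of a post; word order and punctuation are ignored."""
    return set(re.findall(r"[a-z']+", text.lower()))

def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity: 0.0 = no shared words, 1.0 = identical word sets."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

# Three hypothetical posts: two paraphrase one narrative, one is unrelated.
posts = [
    "Sanctions are destroying Europe's economy, not Russia's",
    "It is Europe's economy, not Russia's, that sanctions are destroying",
    "Remember to vote on Sunday and bring your ID",
]

pairs = [(i, j) for i in range(len(posts)) for j in range(i + 1, len(posts))]

# The old approach: exact duplicates only -- it misses the paraphrase entirely.
exact_pairs = [(i, j) for i, j in pairs if posts[i] == posts[j]]

# Word-overlap matching links the reworded pair despite the reordering.
fuzzy_pairs = [(i, j) for i, j in pairs if jaccard(posts[i], posts[j]) >= 0.5]
```

Real monitoring tools use far more robust techniques (multilingual embeddings, clustering), but even this sketch shows why the volume of paraphrased, AI-translated content multiplies the analytical workload.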
Simultaneously, “proxy” networks are frequently deployed, often in the form of local influencers, fringe news sites, and Telegram channels that act as “narrative laundries.” By the time a narrative reaches a human at the end of the chain, it appears to be a personal opinion rather than something originating in Moscow.
“The information environment is full of content aligned with the claims of the Russian government. However, alignment itself is not enough, as genuine conviction and a real person could be on the other side of the screen. … Analysts must look beyond the message and beyond the account to establish whether it is part of a bigger operation or not,” said Zahorjan.
Attribution can be done based on three types of evidence: technical (which focuses on domain ownership, IP data, economic ties), behavioral (account activity, posting patterns, tactics), and contextual (content, language). While some of this work can be done using open-source information, only private companies or intelligence services collect and own other data, which they may not want to share.
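To make the three evidence types concrete, here is a purely illustrative sketch of how converging indicators might be combined into a single coordination score. Every indicator name, weight, and threshold below is invented for illustration; this is not an actual attribution methodology, only a way to show why analysts insist that no single evidence type is conclusive on its own.

```python
# Hypothetical indicators for one account, grouped by the three evidence
# types named in the text. All names and values are invented examples.
evidence = {
    "technical":  {"shared_hosting_ip": True, "domain_registered_anonymously": True},
    "behavioral": {"posts_in_bursts": True, "active_around_the_clock": False},
    "contextual": {"narrative_match": True, "translated_boilerplate": True},
}

def coordination_score(ev: dict) -> float:
    """Average, across evidence types, of the fraction of indicators that fired.

    Each type contributes equally, so one strong category (e.g. technical)
    cannot by itself push the score to the flagging threshold.
    """
    per_type = [sum(flags.values()) / len(flags) for flags in ev.values()]
    return sum(per_type) / len(per_type)

score = coordination_score(evidence)
# Flag only when the evidence types converge, not on a single signal.
flagged = score >= 0.6
```

The design choice worth noting is the equal weighting across categories: it encodes the point from the text that behavioral or contextual alignment alone proves nothing, since a genuinely convinced person may be on the other side of the screen.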
To aid the efforts on attribution, the EU Digital Services Act (DSA) can act as a legal backbone. Under DSA, very large online platforms are now required to provide vetted researchers with access to their internal data, which allows the analysis of automated patterns and coordinated structures.
Media analyst Dragomir, while praising the expanding efforts to monitor and stem FIMI in Europe, suggests that in the long run, resources could be directed more effectively.
“The EU has indeed expanded its spending and policy toolbox in this field. Over the past years it has funded fact‑checking networks, media literacy projects, monitoring hubs, and new calls for research and capacity‑building against information manipulation. It has also adopted sanctions and diplomatic measures against foreign actors engaged in malign information operations,” he said.
At the same time, “this approach still feels more reactive and fragmented than strategic,” said Dragomir. “Much of the money continues to flow into short‑term projects focused on mapping disinformation, monitoring narratives, or running campaigns, rather than into long‑term incentives that strengthen independent media markets and pluralistic public spheres.”
With key elections coming this spring, FIMI experts agree that nuanced analysis of past tactics that bad actors have used to disrupt democratic processes is vitally needed. No less critical is better communication among researchers, official bodies, and the media, to better inform the public both of the dangers and of successful efforts to mitigate them.
This report was written by Tamara Kanuchova, a journalist at the VSquare consortium of nonprofit investigative journalism outlets in Central Europe, with editorial support by Graham Griffith (Transitions).
