Digital Directions: December 7, 2021


Insights on the evolving relationships among digital technologies, information integrity, and democracy from the International Forum for Democratic Studies at the National Endowment for Democracy.

  • Disinformation researchers take a closer look at the focus and impact of Russian state media.
  • Facebook and Twitter are tweaking platform features amid mounting attention to offline harms.
  • Authoritarian models of cyber-sovereignty continue to collide with shared spaces online.

How Does Local Media Spread Russian Disinformation in the Western Balkans?

Russian disinformation operations target information ecosystems by leveraging Moscow’s state-backed media outlets. However, a series of studies from the NATO Strategic Communications Centre of Excellence (StratCom CoE) in Riga, Latvia found that Kremlin-backed international outlets such as Sputnik and RT have a limited reach in regions like the Western Balkans. In such cases, the spread of misleading content rests on local Balkan branches of Russian news outlets and Western Balkan media organizations that promote the same material and themes as those backed by Moscow. Exploring this broader content-sharing ecosystem illuminates how Russian disinformation operations capitalize on information disorder and local media outlets in target countries.

As analysis from EUvsDisinfo highlights, one of the StratCom CoE report’s key findings is that only about 10 percent of the Western Balkans audience (Serbia, Bosnia and Herzegovina, North Macedonia, Kosovo, Montenegro, and Albania) turns to international media as its primary news source. In this context, the Belgrade-based subsidiary Sputnik Srbija is influential in spreading the Kremlin’s preferred narratives “mostly due to distribution of its content by local media.”

The study could not determine whether the Kremlin organizes the amplification of its narratives or whether national media simply pick up on narratives already in circulation. It did, however, conclude that local outlets in the Western Balkans republish Sputnik Srbija’s content or publish very similar content, which spreads easily across countries thanks to language similarities. When news content circulates across the region, local outlets sometimes directly quote and cite Sputnik, but at other times they provide no source at all, leaving “little or no way for the audience to track if these are home-grown or Kremlin-driven” narratives.

Not all Western Balkan countries are equally susceptible to Russian disinformation. StratCom CoE measured the demand-side factors that make states more vulnerable to influence operations, including institutional weakness and low public trust. It identified the greatest “permeability” in Bosnia and Herzegovina, followed by Serbia. GLOBSEC’s Vulnerability Index similarly shows how factors such as public attitudes, the health of the information ecosystem, and the quality of public administration create openings for foreign influence. A European Parliament study has found that foreign disinformation campaigns in the Western Balkans “tend to build upon, amplify and, in some cases, manipulate the actions of domestic players.”

These reports remind us that susceptibility or resilience to foreign influence operations hinges on domestic social, political, and information environments, not only on the commercial reach of state-affiliated media entities like RT. To build resilience, StratCom CoE’s analysis recommends investing in civil society organizations to enhance civic education and improve media literacy as part of a whole-of-society response to the pernicious effects of disinformation. Although there is no one-size-fits-all solution for every state, cooperation and the sharing of best practices and lessons learned can benefit all and improve long-term resilience to disinformation operations.

– Daniel Cebul, International Forum for Democratic Studies



MAKING THE EU MORE RESILIENT TO DISINFORMATION: Latvian MEP Sandra Kalniete’s new report on foreign interference in European democratic processes encourages European leaders to be more ambitious and proactive in fighting foreign influence and disinformation. Kalniete highlighted the growing need for more Mandarin speakers and for expertise in other strategically important regions to address information manipulation and interference emanating from authoritarian states. When it comes to combatting disinformation, the report suggests the EU improve digital literacy and “strengthen its deterrence tools” to dissuade malign actors. For instance, Kalniete suggests developing a sanctions regime specifically designed for foreign interference and disinformation campaigns.

RUSSIAN STATE MEDIA REPORT THE FACEBOOK FILES: Brookings’ Jessica Brandt finds that Russian state-backed media are exploiting Facebook whistleblower Frances Haugen’s testimony to undermine support for democratic models of internet governance. Media outlets linked to the Kremlin, which is aggressively pressing tech companies to censor online speech, have claimed U.S. elites are advancing a “pro censorship, pro-control agenda” by calling attention to the need for more platform content moderation. On this basis, they seek to paint Washington’s support for free expression as hypocritical. Russian media are also using Haugen’s testimony to promote China’s model of internet governance, praising Beijing’s actions to “limit the power of this technology that will destabilize civil society around the globe.”



TWITTER SUSPENDS TRENDS FEATURE IN ETHIOPIA: Twitter temporarily shut down its Trends feature in Ethiopia due to the “imminent threat of physical harm” caused by users “inciting violence or dehumanizing people.” Days before Twitter suspended Trends, Facebook removed a post from the country’s prime minister, Abiy Ahmed, for inciting violence when he called on supporters to “use every weapon and power” to “prevent, reverse and bury” rebel fighters in the Tigray region. Odanga Madung, a fellow at the Mozilla Foundation, said suspending the Trends feature was “in essence an admission of guilt… it means that [Twitter] recognize[s] that it is used consistently for harm, and that their algorithm is in some ways complicit in amplifying that harm.”

META WILL BLOCK SOME TARGETED AD CATEGORIES: Meta, the parent company of the platform Facebook, announced that starting January 19 advertisers will no longer be allowed to target ads on the company’s platforms based on users’ engagement with content linked to traits like race, ethnicity, political beliefs, religion, or sexual orientation. The ban will prevent organizations from running ads targeted using terms like “same-sex marriage,” “Jewish holidays,” and “Catholic Church.” However, Meta will still allow targeted advertising for other categories, which critics say could enable advertisers to continue discriminatory practices. This is not the first time the company has adjusted its targeted advertising system to limit discrimination and abuse: Facebook previously disabled anti-Semitic and other ad categories, some of which had drawn legal pressure and media scrutiny.

FACEBOOK DRAGGING ITS FEET ON HATE SPEECH REPORT IN INDIA: Human rights groups contributing evidence to a Facebook-commissioned study investigating hate speech on the platform in India claim the company is “trying to kill” the report. Researchers claim Facebook is seeking to disrupt the independent report—which a U.S. law firm is producing—by narrowing the study’s definition of hate speech and raising other technical objections. Indian-diaspora–led groups that have provided evidence of hate speech, including horrifying calls for violence against Muslims, report that much of the material remains on the platform. Ritumbra Manuvie, the co-founder of one advocacy group, said Facebook’s “lack of oversight” has “normalized dehumanization and hate speech against Indian Muslims.”



PLATFORMS UNDER GOVERNMENT PRESSURE IN CENTRAL ASIA: Uzbekistan briefly expanded its blocking of social media and messaging services. This move, nominally a response to platforms’ refusal to localize user data in country, was quickly reversed amid public outcry. Neighboring Kazakhstan drew attention with a claim to have gained special access to Facebook mechanisms for reporting prohibited content, something the platform quickly denied. A recent Washington Post editorial argues that Central Asian governments are importing foreign legal and technical models of “cyber-sovereignty,” a trend that Christopher Walker, Shanthi Kalathil, and Jessica Ludwig have highlighted as part of the rise of authoritarian counternarratives in the tech domain.

WHEN ALGORITHMS DECIDE ON SOCIAL SERVICE PROVISION: A recent report from Human Rights Watch (HRW) warns that European countries’ use of AI and other algorithmic decision-making systems to manage social benefits distribution poses unaddressed risks of discrimination and exclusion. In HRW’s analysis, applying algorithmic systems to check identity, assess eligibility, or—as with a recently discontinued system in the Netherlands—calculate fraud risk presents dangers not fully covered by the EU’s pending AI regulation. (The draft text bars governments from engaging in “social scoring for general purposes.”) Earlier reporting by Privacy International with local partners in India, Uganda, Colombia, and Bolivia underscores the global scope of this challenge, as many governments have intensified their reliance on automated systems in these domains since the start of the COVID-19 pandemic.

A VIRTUAL REALITY SPLINTERNET? The launch in China of a “Metaverse Industry Committee,” operating under the auspices of a state-supervised body, raises questions about how PRC metaverse ambitions will intersect with Beijing’s tightly controlled cyber-governance model. PRC tech giants such as WeChat developer Tencent show keen interest in the newly trendy metaverse concept, which involves using virtual or augmented reality technology to create shared spaces and experiences. State media, however, have tried to curb enthusiasm for metaverse-related stocks, and state-affiliated researchers are focusing on security risks. The president of Tencent—a company now under severe regulatory pressure—has stated that the metaverse in China can enjoy government support, but will need to follow Beijing’s particular rules.




This week the Forum also launched its Sharp Power Research Portal, a database that catalogues research and reporting on sharp power in five sectors: media and information; commerce; culture and entertainment; knowledge generation; and technology. Currently, the Portal catalogues 372 research projects, reports, and articles related to the challenges of sharp power in the technology and media sectors.

The International Forum published a new Spanish translation of Samantha Hoffman’s report exploring how the People’s Republic of China (PRC) leverages emerging technologies to undermine the stability and legitimacy of democracies and expand its own influence.


Thanks for reading Digital Directions, a biweekly newsletter from the International Forum. 

Sign up for future editions here!

