Digital Directions: November 12, 2021

Insights on the evolving relationships among digital technologies, information integrity, and democracy from the International Forum for Democratic Studies at the National Endowment for Democracy.

  • Influence campaigns linked to authoritarian state actors take aim at critics abroad.
  • A Facebook announcement on facial recognition leaves unanswered questions about its parent company’s metaverse plans.
  • The burgeoning private cyber surveillance industry is drawing more attention from analysts and policy makers.


The International Dimensions of the Facebook Papers

The Facebook Papers, the product of a consortium of networked investigative journalists working with at least two Facebook whistleblowers, are filled with explosive revelations about the company’s inability to control misinformation, hate speech, and extremism on its platform. While we wait for more of the leaked original documents to make their way past journalists and into the general public, as promised by some in the consortium, the available reporting already reveals three key trends on the international side of Facebook’s practices. First, local staff are overwhelmed and under-resourced; second, Facebook consistently prioritized growth over integrity in a way that disproportionately impacted emerging markets; and third, company culture favored half-measures over more aggressive steps.

1. Local fact-checking organizations expressed frustration about the lack of technical resources and staffing to sort through the maze of hate, disinformation, and misinformation appearing on their screens. Certain technical tools, like classifiers to automatically detect concerning information in local languages, weren’t deployed to “at-risk countries” like Ethiopia. One staff member with a Facebook-affiliated fact-checker had to leave the country due to local intimidation. Researcher Berhan Taye says Facebook has failed to take down much of the problematic content she and a volunteer team have flagged. The Atlantic reports that a meager 13 percent of Facebook’s staff hours for moderating misinformation were devoted to areas outside the United States, even though those areas account for 90 percent of its user base. Facebook researchers knew that extremism, sex trafficking, and hate speech were rampant in Arabic-language spaces on the platform, but Facebook management was slow to scale up the content moderation teams needed to address the problem. Differences between dialects, hiring problems, and inconsistent enforcement plagued the platform’s response.

2. Emerging markets bore the brunt of Facebook’s decision to prioritize growing its user base and creating “meaningful social interactions” through its algorithm. In countries like Burma and Ethiopia, where internet usage was low when Facebook arrived, Facebook quickly became the primary lens through which many people viewed current affairs. Content moderation, translation, and hiring lagged. In some cases, Facebook offered foreign language versions of its platform, but didn’t offer translated versions of its content moderation tools or hate speech reporting forms. As a result, Facebook enabled systemic violence against Rohingya Muslims in Burma, and inflamed civil conflict in the Tigray region of Ethiopia. In Poland, a marginal extremist party took advantage of the boost Facebook’s algorithm gives “emotional messages” to gain the most followers of any Polish party on the platform.

3. Political and public relations considerations induced Facebook to favor half-measures for addressing known problems. Hate speech by a prominent Indian politician stayed up because the platform anticipated that banning him would trigger a “backlash” in this highly coveted market; Facebook did ultimately ban the politician, but not before his (and other) inflammatory rhetoric repeatedly filled Indian news feeds. Similarly, Facebook was slow in reacting to concerns about a 2017 algorithm tweak that gave emoji responses to posts five times the weight of a “like” when assessing whether to show a post to users. There were immediate concerns that since “angry” emojis were included, this change could amplify provocative or unverified content. Yet the platform downgraded the angry emoji’s weight (now zero) only in a series of half-steps over the next three years. Facebook decision makers rejected a number of internal proposals along the way to quicken the de-weighting of emotive content.

In 2014, Facebook dropped its early motto of “Move Fast and Break Things” in favor of broader principles. The Facebook Papers show that, in practice, the motto has continued to apply as Facebook has rushed recklessly into new information markets around the world.

— Kevin Sheives, Associate Director, International Forum for Democratic Studies

MEXICAN PR FIRM SPREADING HONDURAN ELECTION DISINFORMATION: A Honduran researcher unearthed evidence that a Mexico-based PR firm with ties to Honduras’s ruling National Party is operating a network of websites and Facebook pages spreading disinformation in advance of the Honduran presidential election in November. Although disinformation campaigns are not uncommon in Honduras, this investigation is one of the first to directly implicate a Latin American firm. The PR company used the common tactic of creating imitation news websites that mimicked the design of authentic Latin American news outlets, in addition to posting false stories to Facebook pages. Recently, Facebook whistleblower Sophie Zhang discussed the challenges she faced in convincing company management to take action against inauthentic behavior in support of the National Party in Honduras.

GENDERED DISINFORMATION IN GERMAN ELECTIONS: Research by the German Marshall Fund’s Alliance for Securing Democracy and the Institute for Strategic Dialogue shows that Annalena Baerbock, the Green Party’s candidate for chancellor, was targeted by gendered disinformation campaigns and greater levels of online harassment than her major male competitors. Russian state media also promoted negative coverage of Baerbock, who has advocated a sterner German policy stance toward Moscow. Gendered disinformation campaigns draw on sexist tropes and often spread fake images in order to undermine their target. Although such attacks are not new, the Wilson Center’s report Malign Creativity shows that they are increasingly being used by authoritarian state media outlets to target critics at home and abroad.

PRC-LINKED WECHAT ACCOUNTS SPREAD ELECTION DISINFORMATION IN CANADA: Social media accounts operated by a state-linked PRC translation firm, as well as other accounts echoing official state media narratives, sought to influence Chinese diaspora voting in Canada’s September 2021 elections. DFRLab found numerous articles on WeChat, a popular news and communications platform for members of the Chinese diaspora, that targeted Conservative candidate Kenny Chiu with misleading claims that a bill on foreign agent registration he supported would lead to general repression and surveillance of Chinese communities in Canada. Although it is unclear whether this interference changed the outcome of the election (Chiu lost his seat in parliament), Chiu said he was targeted because of his critical stances toward the Chinese Communist Party.

YAHOO BACKS OUT OF CHINA: Yahoo announced it has pulled out of China due to “the increasingly challenging business and legal environment.” This move comes on the heels of similar announcements from Microsoft’s LinkedIn career networking platform and Epic Games, and amid growing regulatory pressure from Beijing on both domestic and foreign tech companies. It also coincided with China’s Personal Information Protection Law coming into force on November 1. The law requires companies to store data in country and pass security assessments before transferring information across borders, or face multi-million dollar fines. Yahoo may be seeking to avoid a repeat of past mistakes when dealing with the Chinese Communist Party and users’ data; in 2007 the company faced lawsuits as well as popular and political blowback in the United States after turning over to PRC authorities data on two Chinese dissidents who were then arrested and imprisoned.

FACEBOOK SHUTTING DOWN FACIAL RECOGNITION SYSTEM: Facebook, recently renamed Meta, announced it would be shutting down the social networking platform’s facial recognition system. Introduced in 2010 to make tagging friends in images easier, the software has been criticized for undermining user privacy. It was the subject of a $650 million class action lawsuit and has been accused of violating data protection laws. Facebook said it would delete the face scan data of its more than one billion users. However, the company does not plan to eliminate the DeepFace algorithm that powers the system. It is leaving open the possibility of building facial recognition technology into future products—a significant stance given the company’s plans to pivot toward building a “metaverse” that hinges on the collection of biometric data. Privacy advocates are also encouraging Facebook to do more to keep facial recognition companies like Clearview AI from scraping its platform for photos to power their own facial recognition systems.

TWITTER PREBUNKS CLIMATE MISINFORMATION: Anticipating an increase in climate disinformation during this year’s United Nations COP26 climate summit, Twitter is attempting to counter the surge by steering users to sources of “credible, authoritative information.” Twitter hopes this “pre-bunk” program will help the company get ahead of false narratives about climate before they have the chance to proliferate across the platform. Rather than debunking disinformation that has already spread, preventative pre-bunks work to raise users’ awareness of false and misleading narratives so they can spot misinformation on their own.

THE RISE OF COMMERCIAL CYBER SURVEILLANCE: A new Atlantic Council report examining arms fair and trade show attendance over the past two decades finds a “clear globalization trend” in the sale of cyber surveillance tools by private vendors. Moreover, some of these companies have shown little discretion in their choice of state clients. In addition to this trend’s geostrategic implications, the transnational cyber surveillance industry threatens to supercharge repression, as underscored by the recent Pegasus leaks. These revelations may have caught the attention of the U.S. Department of Commerce, which last week placed NSO Group and three other cyber surveillance players on its Entity List. Writing for NED’s Center for International Media Assistance, Samuel Woodhams has explained how commercial spyware products induce self-censorship and impede the work of journalists.

HOW SHOULD DEMOCRACIES COUNTER DIGITAL AUTHORITARIANISM? As initiatives to regulate large tech platforms, security concerns over foreign vendors, and conversations about online harms intensify, analysts are questioning the future of the open, global internet. In places like the United Kingdom, the United States, and India, Graham Webster and Justin Sherman detect a “tilt away from techno-globalism.” Yet, these authors contend that the realities of technological interdependence, the principles of democratic openness, and the worldwide impact of U.S.-based companies all underscore the importance of a global perspective in promoting democratic digital approaches. Arindrajit Basu, in his recent analysis of India’s policy vis-à-vis PRC tech, also argues that democracies must “resist their own domestic inclinations to adopt repressive principles” as they look to counter foreign authoritarians.

GLOBAL INSIGHTS ON COVID & THE INFORMATION SPACE (MORE COMING SOON!): Earlier this year, the International Forum published a collection of eight essays on “COVID-19 and the Information Space: Boosting the Democratic Response.” This collection of global insights spanned a range of topics—media sustainability, authoritarian influence, fact-checking, research partnerships, data privacy—affecting regions like Europe, North and South America, and Africa. In the coming weeks, the Forum will be publishing essays from our next Global Insight series focusing on innovative civil society responses to disinformation.

Thanks for reading Digital Directions, a biweekly newsletter from the International Forum.

Sign up for future editions here!