Digital Directions: October 14, 2021

Insights on the evolving relationships among digital technologies, information integrity, and democracy from the International Forum for Democratic Studies at the National Endowment for Democracy.
Today, the International Forum for Democratic Studies is launching a new, specialized newsletter that examines authoritarian manipulation and democratic resilience in the digital domain. Our analysis will focus on how disinformation and influence campaigns affect democracy; the choices made by states and platforms about governing the digital public square; and the challenges and opportunities presented by emerging technologies. We will look at how illiberal actors are exploiting digital technologies and distorting the information space, and what democratic governments and civil societies are doing to craft democratically accountable digital models. Each issue will focus on one Big Story, in which we will analyze and unpack some of the biggest trends in this space. If you enjoy this newsletter, forward it to a friend or share it on social media so others can subscribe.

– Christopher Walker, Vice President for Studies and Analysis, National Endowment for Democracy

  • Leaked documents from Facebook shine a light on global moderation shortfalls and the continuing success of online trolls.
  • Russia is the latest government to push back against online platform takedowns of regime-linked content.
  • Democratic and authoritarian governments pursue divergent responses to the challenge of regulating facial recognition.

What a Changing Internet Means for Democracy

How should digital spaces be governed? This thorny question is growing ever more contentious as online platforms assume an increasingly central role in public discourse; as the Internet of Things dissolves earlier boundaries between physical space and cyberspace; and as machine learning and big data analytics offer governments and corporations alike new ways to leverage the data they collect from both. Alarmed by the power of giant global tech companies and in some cases by antigovernment speech and mobilization online, governments of all stripes are passing new rules intended to bring cyberspace under their authority. Traditional notions of a borderless and loosely regulated digital realm are under siege.

As Freedom House’s 2021 Freedom on the Net report warns, the “global drive to control big tech” poses serious risks for human rights. Illiberal regimes are using regulations to quash nonviolent expression and facilitate state surveillance. The rules they exploit include data localization requirements (forcing platforms to store local users’ data in-country) as well as the “fake news” laws profiled in a recent Forum Power 3.0 blog post. It is no coincidence, then, that the global trend toward greater platform regulation has occurred alongside what Freedom House calls an “unprecedented assault on free expression online.”

While the contours of the illiberal danger are growing clearer, the path toward a prodemocratic response remains foggy. Not all regulations are created equal in their impact on civic rights. Freedom on the Net authors Adrian Shahbaz and Allie Funk hold out hope that proposed requirements for large tech companies to be more forthcoming about their practices could “mitigate online harms while bolstering transparency and accountability.” Positive regulatory models, they argue elsewhere, should “prevent power from accumulating in the hands of a few dominant players, whether in the private sector or the state.” In a report for NED’s International Forum for Democratic Studies, Nicholas Wright has contended that regulating both state and corporate handling of data is an important facet of democratic digital sovereignty. Ranking Digital Rights and others argue that legal privacy protections could have broader knock-on effects in combating online harms, without posing the same human rights quandaries as content regulations.

Still, as democratic governments turn inward to thrash out complex questions of technological governance, they would do well to remember that even well-intentioned regulatory models may present dangers when transferred to illiberal settings. With autocrats stepping up their campaigns to shape cyber norms and standards, what can democrats do to fortify the global guardrails against those who would wield both technology and law as tools for crushing opposition?

Here, two recent publications on democratic digital cooperation and the future of the open internet highlight another critical point: more transnational collaboration is key. Facing the spread of an authoritarian digital model promoted by the People’s Republic of China, democratic states have remained split by conflicting interests and principles. Whether they can unite in defense of core common values may help to determine whether international institutions place a brake on digital repression, or instead accelerate a global slide toward tech-enhanced authoritarianism.

Elizabeth Kerley, International Forum for Democratic Studies

ARE FINANCES THE KEY TO FIGHTING DISINFORMATION? Although more countries around the world are adopting policy measures to counter disinformation, new research from the Global Disinformation Index (GDI) shows that a regulatory gap remains around the ads that fund disinformation sites and content. Current regulations focus on issues such as electoral disinformation and hate speech, but neglect the financial incentives that can drive disinformation efforts. Of the 12 countries surveyed by GDI, only Australia has a voluntary “Code of Practice on Disinformation and Misinformation” that aims to disrupt the monetization of disinformation. The European Commission has likewise presented an update to its Code of Practice on Disinformation that will encourage platforms, online advertisers, and adtech firms to ensure ads do not fund disinformation. Companies including Clubhouse, Vimeo, and DoubleVerify recently announced they will be signing onto the European Code, joining Facebook, Google, Twitter, and TikTok, among others. Still, such voluntary measures lack meaningful enforcement. GDI argues that a common “regulatory floor” with regard to advertising and monetization is needed to ensure a “coordinated, inter-jurisdictional response to demonetise disinformation.”

WHY TROLL FARMS STAY ON TOP AT FACEBOOK: A leaked internal report from Facebook shows how troll farms, which pay people to post provocative and often false content, have exploited the platform’s recommendation algorithm to gain views. When data scientist Jeff Allen penned the memo in October 2019, troll farms in Southeastern Europe were behind 19 of the top 20 religious Facebook pages and 10 of the top 15 African-American Facebook pages. Facebook’s recommendation system enabled troll farm content to reach 140 million Americans per month, even though only 25 percent of those users followed any of the pages. As whistleblower Frances Haugen explained, Facebook’s algorithm promotes content that gets clicks, exposing users to material they did not choose to see. The leaked memo suggests that troll farms might be less successful if Facebook instead prioritized a “graph-based authority measure,” which uses citations to determine how authoritative pages are, or imposed stiffer penalties for copying content from elsewhere (a strategy troll farms use to win large followings even when they do not know their audiences well enough to produce convincing original content).
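The memo’s “graph-based authority measure” follows the same intuition as citation-based ranking algorithms such as PageRank: a page earns authority when other authoritative pages cite it, not simply when its posts attract clicks. Facebook’s internal measure has not been published, so the Python sketch below is only a minimal illustration of that general idea; the authority_scores function, the damping parameter, and the toy “citation” graph are all invented for this example.

```python
# Minimal, illustrative PageRank-style authority scoring over a toy
# "citation" graph of pages. Purely a sketch: Facebook's actual
# internal measure is not public, and this graph is invented.

def authority_scores(links, damping=0.85, iterations=50):
    """Iteratively compute a PageRank-like score for each page.

    `links` maps each page to the pages it cites. Score flows along
    citation edges, so a page is authoritative when authoritative
    pages cite it, not when its own posts merely attract clicks.
    """
    pages = set(links) | {p for cited in links.values() for p in cited}
    n = len(pages)
    scores = {page: 1.0 / n for page in pages}  # uniform start

    for _ in range(iterations):
        # Every page keeps a small baseline ("teleport") score.
        new_scores = {page: (1.0 - damping) / n for page in pages}
        for page in pages:
            cited = links.get(page, [])
            if not cited:
                # Dangling page: spread its score evenly across all pages.
                for p in pages:
                    new_scores[p] += damping * scores[page] / n
            else:
                # Split this page's score equally among the pages it cites.
                share = damping * scores[page] / len(cited)
                for p in cited:
                    new_scores[p] += share
        scores = new_scores
    return scores


if __name__ == "__main__":
    # Toy graph: established pages cite one another; the troll page
    # cites others but receives no citations in return.
    links = {
        "news_outlet": ["community_page", "church_page"],
        "community_page": ["news_outlet"],
        "church_page": ["news_outlet", "community_page"],
        "troll_page": ["church_page"],
    }
    for page, score in sorted(authority_scores(links).items(),
                              key=lambda kv: -kv[1]):
        print(f"{page}: {score:.3f}")
```

Because the troll page in this toy graph attracts no citations from established pages, it converges to the lowest possible score; a feed ranked on citation-weighted authority rather than raw engagement would therefore surface its content less often, which is the dynamic the memo describes.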

HOW SHOULD WE BE RESEARCHING INFLUENCE OPERATIONS? In a recent Lawfare article, Alicia Wanless assesses how the study of influence operations has evolved since 2016 and what needs to change in order to improve the field. While interest in influence operations has grown substantially, some major roadblocks to collaborative research remain. These challenges include a lack of common definitions for key terms; researchers’ distrust of industry; and underdeveloped frameworks for platform transparency reporting and data sharing. Wanless suggests that developing clearer rules and mechanisms in these areas, preferably at the international level, might enable a more coordinated response to influence operations. One key issue she identifies is the need for stable, coordinated, and strategic long-term funding, a problem also flagged by multiple surveys of civil society organizations conducting research on disinformation and influence operations.

POPULISTS AND AUTOCRATS PUSH BACK AGAINST PLATFORM TAKEDOWNS: The Kremlin has threatened to block YouTube after the site removed two German-language channels run by Russian state broadcaster RT amid a clampdown on health misinformation. Russia, which last December adopted a new law targeting platforms that “discriminate” against Russian mass media, joins the growing ranks of countries that have sought to punish platforms for taking down government-sponsored or progovernment content. Recently, Brazilian President Jair Bolsonaro signed a decree, later blocked by the country’s Senate and top court, that would have banned most types of takedowns unless platforms obtained court approval. Similarly, the Ugandan government ordered the blocking of social media platforms and messaging services in January 2021, days before general elections, after Facebook removed a state-linked network for coordinated inauthentic behavior. Political leaders in Mexico, Poland, and elsewhere, many backed by populist movements, have voiced a more general interest in limiting platform takedowns.

FACEBOOK’S GLOBAL MODERATION EFFORTS FALL SHORT: Despite Facebook employees flagging posts by drug cartels, human traffickers, and groups inciting ethnic violence, the company’s systems have frequently failed to stem these abuses. Internal documents reviewed by the Wall Street Journal (WSJ) underscore continuing difficulties caused by a lack of resources for monitoring content outside the United States, particularly in the developing world. These problems have persisted despite new efforts since 2018 to address “at-risk” countries, in part because Facebook lacks content reviewers and content-moderation algorithms that work effectively across many languages. Language barriers have hampered Instagram’s moderation of Arabic-language content, as well as the removal of Facebook content inciting violence against the Tigrayan minority in Ethiopia. In a December document leaked to the WSJ, Facebook employees warned that the platform’s recent integrity work “doesn’t work in much of the world.”

CANADA DEBATES ONLINE HARMS LEGISLATION: Public consultation over proposed social media legislation in Canada has reignited long-running debates over balancing privacy and free expression with the need to counter online abuses. The government proposal would establish a new Digital Safety Commission and require platforms to take down flagged content falling under categories already illegal in Canada (including terrorist content, hate speech, and child exploitation content) within 24 hours. A consultation submission by Ranking Digital Rights (RDR) criticizes the draft for granting regulators too much power and warns that it would induce platforms to put in place “proactive monitoring tools” that would undermine user privacy. In RDR’s view, the proposed legislation “falls into the trap of focusing exclusively on the moderation of user-generated content” without addressing the deeper issue of platform business models. The Canadian Civil Liberties Association similarly cautions that the 24-hour takedown requirement would undercut free expression by incentivizing moderators to remove content whether or not it is clearly illegal. These submissions echo concerns raised over proposed or existing social media legislation in other longstanding democracies, including Germany and the United Kingdom.

DEMOCRACIES GRAPPLE WITH FACIAL RECOGNITION: In open societies, criticism of facial recognition technology and calls for its regulation are mounting. Last week, members of the European Parliament voted in favor of a ban covering both law enforcement use of facial recognition in public spaces and facial recognition databases held by private companies such as Clearview AI. In the U.K., activists and journalists are questioning the handling of facial verification data collected by a private vendor when citizens access the National Health Service app. Others have criticized plans by London’s law enforcement officials to expand their use of Retrospective Facial Recognition technology, which scans back through old images and video footage. In both cases, critics point to a lack of public access to information about how the technology is used. In the United States, analysts warn that the expanded use of facial recognition to verify eligibility for government benefits during the pandemic risks shutting out members of marginalized groups.

A FAST TRACK FOR AI SURVEILLANCE IN RUSSIA: While Russia has faced similar debates around the rapid spread of facial recognition systems, regulators there may be less inclined to curb their use. Human Rights Watch (HRW) reports that an AI law passed by the Russian parliament in April 2020 authorizes Moscow’s city authorities “to test new technology, including facial recognition, free of most of the personal data legislation restrictions.” Alongside a planned upgrade to the city’s facial recognition system, a second type of biometric surveillance known as silhouette recognition may soon enable authorities to track individuals even without face data. The city’s facial recognition technology has been a bone of contention between officials and digital rights activists (one of whom sued after successfully purchasing surveillance records of her own movements for $200 on the black market). City officials, however, reportedly insist that the information collected by Moscow’s cameras is not covered by personal data protection regulations.

In a new Power 3.0 blog post, “The Information Manipulation Risk from Audio-Based Group Chat Apps,” Daniel Cebul explains the unique disinformation risks and content moderation challenges posed by audio-based social media apps. Authoritarian actors are exploiting these vulnerabilities to spread disinformation and target opposition groups. To overcome authoritarian manipulation of this rapidly growing communication landscape, Cebul advises civil society organizations, audio-chat platforms, researchers, and policymakers to collaborate on developing content moderation tools that will both counter disinformation efforts and protect user privacy.

On Tuesday, October 19 at 9:00 am EDT, the International Forum for Democratic Studies and the Carnegie Endowment for International Peace will hold a joint online event entitled “Digital Repression: Confronting the Evolving Challenge.” Three members of Carnegie’s Digital Democracy Network—Arindrajit Basu (Centre for Internet and Society), Irene Poetranto (the Citizen Lab), and Jan Rydzak (Ranking Digital Rights)—will discuss how the digital repression landscape is changing in light of recent technological advances, shifting global trends in tech governance, and the digital fallout from the COVID-19 pandemic. Carnegie interim president Thomas Carothers, NED vice president Christopher Walker, and Carnegie senior fellow Steven Feldstein will offer remarks. For details and to register, please visit our website.

Thanks for reading Digital Directions, a biweekly newsletter from the International Forum. 

Sign up for future editions here!

 
