Digital Directions: October 27, 2021

Insights on the evolving relationships among digital technologies, information integrity, and democracy from the International Forum for Democratic Studies at the National Endowment for Democracy.

  • Amid ongoing fallout from leaked internal documents, another Facebook whistleblower sheds light on the platform vulnerabilities that authoritarians exploit.
  • In Russia, China, and beyond, Silicon Valley platforms are recalibrating their responses to mounting authoritarian censorship demands.
  • Officials in the United Nations and the United States have started to think through a rights-based approach to AI.


How Networked Devices Are Changing the Face of Digital Repression 

While digital repression may once have seemed synonymous with internet shutdowns and censorship, the proliferation of tools that glean and process digital data from the physical world is giving rise to new authoritarian models. At one extreme, such tools can underpin a comprehensive system of networked repression—yet they can also blur the line between convenience and surveillance.

How, for instance, should one classify the recent introduction of a facial recognition payment system in the Moscow metro? The move has drawn alarm from activists and commentators, understandably so given the earlier use of Moscow’s facial recognition cameras to track down protesters. Yet these cameras fall within a trend that encompasses not only authoritarian regimes—facial recognition payment has been used in Kazakhstan and China—but also established democracies. The United Kingdom, for instance, is trialing a similar payment system for school lunches, and facial scans may soon become the way travelers on the Eurostar cross borders. Many Apple users rely on Face ID to authorize payments for everyday items.

When deployed deliberately as instruments of authoritarian governance, networked surveillance tools can buttress an integrated system of physical and digital surveillance and control. Examining the “data fusion” platforms available to law enforcement in the People’s Republic of China (PRC), CSET’s Dahlia Peterson explains why the online and offline information streams at their disposal amount to more than the sum of their parts. These platforms piece together data from facial and license plate recognition systems; identifying numbers lifted from cell phones; online activity records; and other state and corporate information in order to see how people are connected and predict behavior. Moreover, Darren Byler’s work on Xinjiang shows how, by feeding this information back into a net of biometric surveillance tools and ID checkpoints, local authorities can control residents’ movements.
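
To make the fusion step concrete, here is a minimal, purely illustrative sketch in Python. Every sensor name, identifier, and field in it is hypothetical, not drawn from any real platform; it shows only the core logic such systems are described as performing: joining sightings from different sensors on a shared identifier and inferring who was co-located with whom.

```python
# Purely illustrative sketch: all sensor names, IDs, and fields below are
# hypothetical. The core "fusion" move is joining sightings from different
# sensors and inferring co-location.
from collections import defaultdict
from itertools import combinations

# Each record: (person_id, source, location, time_bucket).
records = [
    ("P1", "face_camera", "metro_station_A", "09:00"),
    ("P2", "imsi_catcher", "metro_station_A", "09:00"),
    ("P1", "plate_reader", "checkpoint_3", "17:30"),
    ("P3", "face_camera", "checkpoint_3", "17:30"),
]

# Group people by (location, time) to find co-occurrences across sensors.
seen_together = defaultdict(set)
for person, _source, location, time_bucket in records:
    seen_together[(location, time_bucket)].add(person)

# Emit an "association" edge for every pair seen at the same place and time.
edges = {
    pair
    for people in seen_together.values()
    for pair in combinations(sorted(people), 2)
}
print(edges)  # {('P1', 'P2'), ('P1', 'P3')}
```

Even this toy version shows why fused streams exceed the sum of their parts: no single sensor observed a link between P1 and P2, yet the joined data does.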

While this system is an extreme case for the time being, its components are very much for export. As analyst Jonathan Hillman writes, “The same technology that is contributing to the greatest human tragedy of this century may also watch over streets in your city, buildings in your neighborhood, and the living room next door.” Consumers are increasingly outfitting their homes and even themselves with surveillance cameras and other networked objects, and PRC companies are among the leading global vendors of such systems. These companies include Hikvision, which has a state-owned corporation as its largest shareholder and which, Hillman notes, has profited from combining “generous state support at home and low-cost sales abroad.”

But the issues raised by the rapid growth of the Internet of Things (IoT) go beyond questions about vendors’ practices and ties: As Hillman also recognizes, the data streams these devices generate are vulnerable to malicious actors, whether based in Beijing, Berlin, or Beirut. Such data harvesting may threaten not only personal privacy, but also democratic integrity. It could, according to internet scholar Laura DeNardis, facilitate “personalized attempts to influence or disrupt [citizens’] political participation.” Lindsay Gorman has similarly argued that IoT data collected by the PRC could serve purposes “from surveilling populations and arresting dissident journalists to tracking intelligence assets, exploiting personal information for kompromat [compromising material—eds.], feeding personal data into AI systems, and developing micro-targeted manipulation for information warfare narratives.”

While scholars, legislators, and activists have begun to suggest safeguards—from use restrictions to increased data security to transparency requirements—that could mitigate these risks, no society yet has a fully developed answer to this emerging challenge. As networked objects collect more information about who we are, where we go, and how we live, democracies must consider how to ensure that these data points do not add up to unprecedented social control.

NOBEL PRIZE WINNER HIGHLIGHTS THE NEXUS OF DISINFORMATION AND VIOLENCE: Maria Ressa, the co-founder of the Philippine digital media company Rappler, was one of two journalists (alongside Russia’s Dmitry Muratov) to be awarded the Nobel Peace Prize for their work supporting free expression. Since Rodrigo Duterte’s rise to the presidency in 2016, information operations in the Philippines have proliferated and contributed to the deaths of human rights defenders. Ressa herself has faced threats, graphic online harassment, and politically motivated legal charges. Rappler recently highlighted how a pro-Duterte propaganda network has been using Facebook pages, blogs, and “alternative” news sites to “red-tag” progressive activists as terrorists. Many individuals thus targeted later fell victim to offline violence. In a recent interview, Ressa observed: “When you say a lie ten times, truth has a chance of catching up. When you say a lie a million times on social media, with algorithmic distribution and the choices these platforms make to actually prioritize the spread of the lies laced with anger and hate over the facts, you don’t have a chance.”

DOMESTIC DISINFORMATION IS GROWING IN AFRICA: In an interview with the Africa Center for Strategic Studies, Tessa Knight of the Atlantic Council’s Digital Forensic Research Lab describes several operations that illustrate the recent growth of domestic disinformation campaigns on the African continent. Most of the campaigns Knight investigated focused on political issues like elections in Uganda and armed conflict in the Tigray region of Ethiopia, while a network in the Democratic Republic of the Congo that initially pushed COVID-19 disinformation later shifted its focus to politics. Although these operations built on tactics previously used in Africa by foreign actors like Russia and Saudi Arabia, Knight stresses that many people are “vastly underestimating the ability of political parties and online actors on the continent and how far people are willing to go to push their agenda.” Knowledge of local languages and culture gives domestic disinformation operators an advantage over their foreign counterparts.

REALITY AND PERCEPTIONS AROUND IRANIAN INFLUENCE OPERATIONS: This September, Facebook removed hundreds of accounts and pages associated with coordinated inauthentic behavior in Sudan and Iran, with activity in both cases linked to military organizations. Earlier in the year, Facebook identified Iran as one of the top sources of state-backed disinformation on its platform. Yet after analyzing Iranian operations from 2008 to 2020 that were aimed at the Arab world—a primary target of such operations—Mona Elswah and Mahsa Alimardani found that they drew little engagement from users. Elswah and Alimardani argue that operations of this kind should be termed “perception IOs,” meaning their strength lies in creating the perception that Iran can distort foreign public opinion rather than in actually seeding false narratives. Writing in the Journal of Democracy, Péter Krekó has classified such operations as part of the toolkit of “authoritarian inflation,” which aims to make people view foreign authoritarian states as more powerful than they are in reality.

FACEBOOK WHISTLEBLOWER ON AUTHORITARIAN INFLUENCE TACTICS: In recent weeks, Facebook has reeled from a series of damaging revelations flowing from leaked internal documents and whistleblower testimony on issues that include human trafficking and incitement to violence on the platform; conflicts between in-house integrity monitors and platform management; and the algorithmic amplification of divisive content. On October 18, former Facebook data scientist Sophie Zhang testified before the U.K. parliament on platform enforcement loopholes that may be helping authoritarian politicians to manipulate discourse. Zhang, who investigated bots as part of Facebook’s Site Integrity team, said it was more difficult to take down fake accounts if they were associated with a political figure, creating “an incentive for major political figures to essentially commit a crime openly.” She also described Facebook as unwilling to devote resources to combating inauthentic behavior outside a narrow group of wealthy Western states and “foreign adversaries,” a complaint that echoes a recent Wall Street Journal exposé based on the leaked documents. The whistleblower said Facebook resisted taking action, for instance, against a Honduran bot network that amplified content supporting President Juan Orlando Hernández. We will take a closer look at the contents of the “Facebook Files” in the next edition of this newsletter.

TAKING STOCK OF PLATFORM COMPROMISES AS LINKEDIN LEAVES CHINA: The career networking site LinkedIn announced it will end its service in China due to a “significantly more challenging operating environment,” alluding to a combination of business and censorship concerns. LinkedIn previously stood out in continuing to operate in the PRC market, and acceding to state censorship demands, after other large U.S. platforms departed or were blocked. Nonetheless, regulators reprimanded the networking site this spring for failing to censor adequately, and new data security rules threatened to impose additional burdens. While LinkedIn’s balancing act in China may have come to an end (the platform still plans to offer a more limited, China-specific job-postings service), the compromises between Silicon Valley giants and authoritarian censors are making new headlines in Russia and elsewhere as companies confront data localization requirements, laws on content, and pressures on local staff.

INTERNAL RESEARCH SHOWS TWITTER ALGORITHMS AMPLIFY RIGHT-WING CONTENT: An internal Twitter study analyzed tweets from elected officials in Canada, France, Germany, Japan, Spain, the U.K., and the U.S., as well as tweets linking to media outlets that were coded according to political slant. It found that Twitter’s ranking algorithm amplified tweets by elected officials across parties, but that tweets from the right received more amplification in every country except Germany. Content from media outlets coded as “right-leaning” was also more likely to be pushed to users. Owing to the complexity of the machine learning processes that ultimately produce content decisions, Twitter does not know why its algorithms boost right-leaning content. One Twitter team director has speculated that right-wing politicians could be better at promoting their views, while other analysts suggest that right-leaning posts might “successfully spark more outrage, resulting in amplification.” Twitter also announced that it will soon begin sharing large datasets, protected by privacy safeguards, to help independent researchers dig into internal studies of this kind, an action called for by Renée DiResta in a January 2021 Forum publication.
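
The study’s central quantity is an amplification ratio: how much reach a group’s tweets get in the personalized, ranked timeline relative to a reverse-chronological control group. A simplified sketch of that comparison, with invented numbers, might look like this:

```python
# Simplified sketch in the spirit of the study's metric: compare the reach a
# group's tweets get in the ranked timeline against a reverse-chronological
# control. All numbers below are invented, not drawn from the study.
def amplification_ratio(ranked_impressions: int, chrono_impressions: int) -> float:
    """Values above 1.0 mean the ranking algorithm boosted the content."""
    return ranked_impressions / chrono_impressions

# Hypothetical per-party impression counts (not real data).
impressions = {
    "party_left": {"ranked": 120_000, "chrono": 100_000},
    "party_right": {"ranked": 160_000, "chrono": 100_000},
}

for party, counts in impressions.items():
    ratio = amplification_ratio(counts["ranked"], counts["chrono"])
    print(f"{party}: amplification = {ratio:.2f}")
```

Framing the finding this way clarifies what “more amplification” means: both groups can be boosted above the chronological baseline while one group’s ratio is consistently higher.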

DEFINING RIGHTS IN THE AGE OF AI: While commentators have long advocated a human rights-based approach to artificial intelligence (AI), official bodies are now considering what such an approach might look like in practice. As Elizabeth Renieris outlines, the Office of the UN High Commissioner for Human Rights has released a report assessing what international human rights law might require of states and corporations when it comes to analyzing, regulating, and mitigating AI’s human rights impacts. The report, nominally focused on privacy but also considering a range of other rights, examines use areas including criminal justice, public services, and content moderation. Meanwhile, in the United States, the White House Office of Science and Technology Policy has announced plans to open discussion on an AI “bill of rights,” which might address issues such as auditing AI systems for biases; informing people when they are the subjects of AI decision making; and offering recourse to those unjustly harmed by such decisions. These proposals join the European Union’s draft AI Act, under discussion since this spring, which names safeguarding fundamental rights among its main objectives.
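
What might “auditing AI systems for biases” involve in practice? One common starting point, shown here as a hedged sketch with invented data, is a demographic-parity check that compares a model’s positive-decision rates across groups:

```python
# Hedged sketch of one common audit step: demographic parity, i.e., comparing
# a model's positive-decision rate across groups. The decision log is
# invented; real audits use multiple metrics and domain-specific baselines.
from collections import Counter

decision_log = [  # (group, model_decision) -- hypothetical records
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

totals, positives = Counter(), Counter()
for group, decision in decision_log:
    totals[group] += 1
    positives[group] += decision

rates = {group: positives[group] / totals[group] for group in totals}
parity_gap = max(rates.values()) - min(rates.values())
print(rates)                             # {'group_a': 0.75, 'group_b': 0.25}
print(f"parity gap = {parity_gap:.2f}")  # auditors flag gaps above a chosen bound
```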

AN OUT-OF-THE-BOX MODEL FOR CENSORSHIP? A recent investigative report by the New York Times underscores how new technical capacities are shifting the balance in Russia’s struggle to rein in foreign tech companies. A few years ago, analysts such as Jaclyn Kerr and Robert Morgus argued that Russia represented an alternate model of internet control—relying heavily on surveillance, hacking, legal and extra-legal harassment, and influence operations—that could appeal to governments lacking a PRC-style “Great Firewall.” Now, Russian authorities have complemented this model with a technical approach that works on top of the country’s comparatively decentralized telecoms infrastructure: deep packet inspection (DPI) “black boxes” installed on the systems of internet service providers under a 2019 law. These traffic-filtering devices, for instance, enabled Moscow to slow down access to Twitter in Russia when the service failed to comply with takedown requests. Former State Department official Laura Cunningham warns that this approach “can quickly and easily be replicated by other authoritarian governments.”
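
At its core, DPI filtering of this kind rests on a simple matching step: scan traffic for byte patterns, such as a hostname in a TLS Client Hello’s SNI field, and apply a policy to matching flows. The toy sketch below illustrates only that matching logic; the patterns are examples taken from public reporting on the Twitter slowdown, and real middleboxes parse protocols far more carefully.

```python
# Toy illustration of the matching step a DPI box performs: scan payload
# bytes for target hostname patterns (e.g., in a TLS Client Hello's SNI
# field) and flag matching flows for throttling. The byte scan is a
# deliberate simplification of what production middleboxes do.
THROTTLE_PATTERNS = [b"t.co", b"twimg.com"]  # examples from public reporting

def classify_payload(payload: bytes) -> str:
    """Return the action a filtering box might take for this payload."""
    if any(pattern in payload for pattern in THROTTLE_PATTERNS):
        return "throttle"  # shape bandwidth for this flow
    return "pass"          # forward at full speed

# A payload fragment carrying an SNI-like hostname triggers the filter.
print(classify_payload(b"\x16\x03\x01...abs.twimg.com..."))  # throttle
```

As researchers noted at the time, crude substring matching like this can overreach: any domain that happens to contain a target pattern gets caught in the same net.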

“No Safe Haven”: Commercial Spyware’s Global Reach (Center for International Media Assistance): Drawing on his recent report, “Spyware: An Unregulated and Escalating Threat to Independent Media,” Samuel Woodhams details how undemocratic actors weaponize commercial spyware to crack down on independent journalists. State actors exploit surveillance technology to monitor journalists’ phone calls, location, and photographs. Moreover, the companies marketing this spyware are often based in liberal democracies, whose governments have done little to regulate its export. Woodhams calls for ethical standards in the market to ensure that such technology cannot be used to suppress free expression and independent media.

Thanks for reading Digital Directions, a biweekly newsletter from the International Forum. 

Sign up for future editions here!
