Digital Directions: July 2024

By Maya Recanati | Edited by Beth Kerley and Adam Fivenson

Bimonthly insights on the evolving relationships among digital technologies, information integrity, and democracy from the International Forum for Democratic Studies at the National Endowment for Democracy. If you like this newsletter, share it on social media or forward it to a friend.

SHARE ON FACEBOOK | SHARE ON TWITTER | SHARE ON LINKEDIN | FORWARD TO A FRIEND

How Generative AI Supercharges Information Manipulation

by John Engelken, Senior Editorial Coordinator, International Forum for Democratic Studies

“[I]t is clear that authoritarian powers are experimenting and incorporating generative AI into their strategies to undermine democracy” on a global scale, according to Beatriz Saab.

In June, the Forum published a new report authored by Saab, which assesses how authoritarians are using generative AI to advance malign narratives that divide open societies and to undermine the concept of a shared truth, which lies at the heart of democratic institutions. In response, civil society organizations and journalists have begun to use generative AI tools to push back against authoritarian information manipulation in public discourse.

Regimes in Russia, China, Iran, and elsewhere actively seek to undermine trust in democracy worldwide, and critical changes to the information environment are aiding their efforts. Saab notes that “the growth of generative artificial intelligence (gen AI) is among the most important of these changes, reducing the cost, time, and effort required by authoritarian actors to both mass-produce and disseminate manipulative content.” In turn, authoritarians use these technological capabilities to advance malign narratives, exacerbate social divisions in democracies, and undermine the concept of a shared and knowable truth, without which democratic institutions such as the rule of law, free and fair elections, and human rights cannot be sustained.

Manipulative uses of gen AI are particularly prominent in election contexts, where they have been tracked “around nearly every national election since mid-2023.” Though experts still grapple with the true impact of these technologies’ use on election outcomes, their presence is undeniable. Gen AI technologies allow authoritarian regimes to produce “high-fidelity synthetic media at low cost and effort” while automating the “production and delivery of manipulative content at scale.” Furthermore, the capability to individually tailor the dissemination of targeted content may enhance the impact of information campaigns. As Saab warns, when taken together, these capabilities “simultaneously weaken public trust in online content, society, and democracy itself.”

At the online launch event for this report, digital rights advocate Nighat Dad expressed concern about the impact of gen AI on recent elections, using Pakistan as an example: “During the Pakistani election, generative AI was used to manipulate political campaigns through convincing deep fake videos and audio clips that spread misinformation about candidates. . . the highly targeted nature of AI-generated content exploited voter biases and charged emotions, complicating the already-complicated political and electoral landscape in Pakistan.”

Conversely, Saab argues that civil society organizations have the “opportunity to use gen AI to support the integrity of the information space” and “promote democratic discourse during election campaigns.” Broadly adopted ethical and democratic standards for the use of gen AI systems are still being developed. Still, democratic reformers can use these tools to enhance fact checking, streamline journalistic practices that allow for more timely news reporting and investigation, and improve media monitoring and AI detection efforts. Yet any effort to use these tools must be transparent and carefully directed to avoid their misuse.

During the online event, WIRED magazine’s Vittoria Elliott observed that gen AI is a tool with broad application in the democratic space. She highlighted how the Belarusian opposition had used gen AI technology to “create” an AI candidate for political office that espoused prodemocracy views—the only “candidate” to safely do so during the election. This effort to promote the candidacy of a fake, prodemocracy candidate in Belarus’s recent elections highlighted political repression in the country and showed how “opposition politicians—especially those in exile—can use outreach and satire to connect with voters without assuming personal risk in closed societies.”

For Saab, civil society and other prodemocratic reformers should “harness the power of gen AI to resist authoritarian information manipulation.” It is critical to study this technology and its use in greater detail to better prepare democratic actors to accelerate and amplify their efforts to secure the integrity of the information space.

For more on this topic, read the Forum’s latest report, “Manufacturing Deceit: How Generative AI Supercharges Information Manipulation,” or view the launch event featuring Saab, Nighat Dad, Vittoria Elliott, and Adam Fivenson.

MORE FROM THE FORUM

Automated Decision-Making and Democratic Norms

Check out this article in Tech Policy Press from the Forum’s Maya Recanati examining automated decision-making and AI in social services and the risks to democratic norms. Read the full article here.

The State of Deployment of Surveillance Tech in Africa

This report from Paradigm Initiative documents how the diffusion of surveillance technologies, such as mobile spyware and facial recognition, throughout Africa is threatening fundamental rights such as privacy. The report identifies regional trends shaping the deployment of surveillance technologies—for example, in West Africa, political instability and security threats have sped up government adoption of surveillance tools. In response, civil society and others can provide legal support to targets of unauthorized surveillance, engage in constructive dialogue with stakeholders, and raise public awareness. Read the full report here.

Microsoft Bing’s Censorship in China

New research from Citizen Lab finds that Microsoft censors its Bing translation results more than comparable domestic services in China, including Baidu Translate and Tencent Machine Translation. During tests, Bing censored entire outputs of text containing sensitive terms, whereas similar Chinese services censored only the triggering portions of the text. As the article notes, censoring translations can stifle dissent and block the global spread of information. Bing remains the only major foreign translation service and search engine still available in China. Read the full article here.

Using AI to Inform Policymaking

A new report from the AI4Democracy project at IE University explores how AI can be used for collective decision making and policymaking. The study examined the Talk to the City platform, which solicits, analyzes, and organizes public opinion, and its applications across different aspects of deliberative decision making—for example, to find shared principles. Researchers concluded that large language models (LLMs) are particularly helpful in summarizing large-scale deliberations; identifying overarching themes from broad discussions; and producing reports that efficiently communicate context. Read the full report here.

MORE FROM THE NED FAMILY

The Global Expansion of PRC Surveillance Technology

A new report from the International Republican Institute analyzes how the increasing diffusion of PRC surveillance technologies around the world is helping Beijing achieve important policy goals that may undermine democratic principles. Read the full report here.

In Sub-Saharan Africa, China Embraces the Kremlin’s Messaging

Research from DFRLab documents the PRC’s efforts to amplify pro-Russian narratives in sub-Saharan Africa through content sharing agreements, PRC-aligned commentators, social media platforms, and broadcasting infrastructure. Particularly since the invasion of Ukraine, the PRC has assumed a more aggressive role in promoting Russian content as part of its strategy to achieve “discourse power” across the continent. The growing collaboration between the two states threatens the integrity of African information ecosystems and undermines global support for Ukraine. Read the full report here.

UN Global Principles for Information Integrity

The UN’s recently released Global Principles for Information Integrity outline a framework to guide multistakeholder action for a healthier information ecosystem. The document outlines five principles—societal trust and resilience; independent, free, and pluralistic media; transparency and research; public empowerment; and healthy incentives—that are central to strengthening information integrity. Additionally, the document provides targeted recommendations for stakeholders, including civil society, to operationalize the five principles, such as promoting open access initiatives; upholding integrity and ethical standards; and collaborating across geographies and contexts. Read the principles here.

The Increased Role of Generative AI in Russia’s Propaganda

NATO’s Strategic Communications Centre of Excellence documents the increased role of generative AI in the Kremlin’s global information operations. The Centre’s recent research identified pro-Kremlin, coordinated groups using generative AI, for example, to generate content and cross-reference it across platforms. The report highlights 17 coordinated groups of accounts using AI to reduce the production and distribution costs of information operations. These networks carry anti-Ukraine narratives, framing NATO in the context of colonialism and Russia as a victim. Read the full report here.


How Taiwan Should Combat China’s Information War

In this Journal of Democracy article, Tim Niven of Doublethink Lab delves into CCP information manipulation in Taiwan and how local civil society actors can push back. Read the full article here.


Thanks for reading Digital Directions! If you enjoy this newsletter, forward it to a friend or share on social media so that others can subscribe.
