Manufacturing Deceit: How Generative AI Supercharges Information Manipulation

June 20, 2024
10:00 am - 11:00 am

by John Engelken

As generative AI becomes more sophisticated and widespread, authoritarians are using these tools to undermine democracy around the world. In June, the Forum published a new report by Beatriz Saab and launched it at an online event featuring the author, with additional commentary by Nighat Dad (Digital Rights Foundation), Vittoria Elliott (WIRED), and Adam Fivenson (International Forum for Democratic Studies). The report assesses how authoritarians are using generative AI to advance malign narratives and break down the concept of a shared truth that lies at the heart of social trust and democratic institutions. It also describes the ways in which civil society organizations have begun to use generative AI tools to push back against authoritarian distortions in the information space.

Author Beatriz Saab writes, “[I]t is clear that authoritarian powers are experimenting and incorporating generative AI into their strategies to undermine democracy.” 

Authoritarian regimes in Russia, China, Iran, and elsewhere actively seek to undermine trust in democracy worldwide, and critical changes to the information environment are aiding their efforts. Saab notes that “the growth of generative artificial intelligence (gen AI) is among the most important of these changes, reducing the cost, time, and effort required by authoritarian actors to both mass-produce and disseminate manipulative content.” In turn, authoritarians use these technological capabilities to advance malign narratives, exacerbate social divisions in democracies, and undermine the concept of a shared and knowable truth. 

Manipulative uses of gen AI are particularly prominent in election contexts, where their usage has been tracked “around nearly every national election since mid-2023.” Though experts still grapple with the true impact of these technologies on election outcomes, their presence is undeniable. Gen AI technologies allow authoritarian regimes to produce “high fidelity synthetic media at low cost and effort” while automating the “production and delivery of manipulative content at scale.” They can even individually tailor the dissemination of this content for higher impact.

As Saab warns, when taken together, these capabilities “simultaneously weaken public trust in online content, society, and democracy itself.” 

The consequences of gen AI’s use by authoritarian actors extend beyond the context of elections. At the online launch event for this report, Nighat Dad expressed concern about the societal divisions this technology can cause, explaining, “In deeply patriarchal societies, we cannot ignore how gen AI unequally targets female political candidates.” She also highlighted privacy concerns—specifically over how these AI models were trained—that remain understudied and inadequately addressed.

Still, the author argues that civil society organizations have the “opportunity to use gen AI to support the integrity of the information space” and “promote democratic discourse during election campaigns.” Although broadly adopted ethical standards for the use of gen AI systems are absent, democratic reformers should still explore the use of these tools to enhance fact-checking, streamline journalistic practices that allow for more timely news reporting (as well as investigation), and improve AI detection efforts—provided democratic actors are transparent about this technology’s use and accuracy.

Furthermore, during the online event, Vittoria Elliott observed that gen AI is a tool with broad application in the democratic space. She highlighted how Belarusian opposition politicians and prodemocracy reformers had used this technology from outside the country, exemplifying how “opposition politicians—especially those in exile—can use outreach and satire to connect with voters without assuming personal risk in closed societies.”

Despite the ongoing complexities regarding the use of gen AI technologies, Saab believes that civil society and other prodemocratic reformers should “harness the power of gen AI to resist authoritarian manipulation.” She also argues that it is critical to study this technology and its use in greater detail to better prepare democratic actors to accelerate and amplify their efforts to protect information spaces and global democracy.

For more on this topic, read the Forum’s latest report, “Manufacturing Deceit: How Generative AI Supercharges Information Manipulation” or view the launch event featuring remarks from Beatriz Saab, Nighat Dad, Vittoria Elliott, and Adam Fivenson. 