Digital Directions: November 2023

Bimonthly insights on the evolving relationships among digital technologies, information integrity, and democracy from the International Forum for Democratic Studies at the National Endowment for Democracy. If you like this newsletter, share it on social media or forward it to a friend.



Setting Democratic Ground Rules for AI

by Beth Kerley, Senior Program Officer, International Forum for Democratic Studies

This Big Story was adapted from Beth Kerley’s recently published report for the International Forum for Democratic Studies, Setting Democratic Ground Rules for AI: Civil Society Strategies.

Advances in artificial intelligence (AI) are changing the playing field for democracy. Since social media’s emergence as a tool of protest, commentators have regularly stressed how our evolving technological landscape is transforming our political world. The digital tools on which we rely help to determine how people express themselves, find like-minded communities, and initiate collective action. As the International Forum has tracked in our “Making Tech Transparent” series examining AI surveillance, smart cities, and the digitalization of governance, these technologies also affect how governments monitor people, administer services, and—in some cases—dole out repression.

We are poised for another seismic shift in the balance of power between people and governments. With recent leaps in the development of large language models (LLMs), the global proliferation of AI surveillance tools, and growing enthusiasm for the automation of governance processes, technological advances are already challenging democratic systems. Recognizing that digital advances themselves do not necessarily work to democracy’s benefit, prodemocratic stakeholders must work to erect guardrails around AI development and deployment; ensure consultation with communities whose democratic rights may be impacted by AI systems; and chart development trajectories that infuse AI technologies with democratic values. The consolidation in dictatorships of authoritarian models for the integration of AI—which reject privacy, popular input, and rights-based frameworks in favor of top-down control—heightens the urgency of this task.
The following analysis, drawn from contributions at the Forum’s May 2023 workshop “Closing Knowledge Gaps, Fostering Participatory AI Governance: The Role of Civil Society” in Buenos Aires, Argentina, presents some initial reflections from expert stakeholders—chiefly within the digital rights and open government communities—about the present state of the AI governance landscape and potential avenues for civil society intervention. Several key points emerged from our discussion.

First, stakeholders across government, media, civil society, and the private sector need to see the human agency, relationships, and structures behind AI models—whether the social inequalities that produce biases in data and design, or the political relationships that underpin surveillance deals. Recognizing the human factors and choices that determine how AI systems affect us, rather than seeing these impacts as inevitable, is critical to maintaining democratic accountability for the policymakers, developers, and others who exercise power over and through AI technologies.

Second, these contributions underscore the urgency of equipping democratic societies and institutions to keep up with a constantly evolving set of AI harms and risks. The absence of established norms, learning processes, and institutions addressing AI harms and risks creates challenges both for governments seeking to regulate AI and for civil society organizations considering how to use the technology responsibly. Even as processes such as algorithmic impact assessments (AIAs) become more institutionalized, the underlying technical landscape is changing.

Finally, participants stressed the importance of developing new strategies, processes, and collaborations to give real force to principles such as AI transparency, accountability, and privacy by design.
Participants faced serious challenges both in engaging with the private-sector actors responsible for much of the decision-making around AI, and in translating state transparency and accountability mechanisms into meaningful rights protections. Deeper involvement by affected stakeholders, especially marginalized communities, at earlier stages of regulatory and design processes was a recurring demand.

As AI use grows more pervasive, our expectations of privacy, access to public goods, and opportunities to challenge injustice from the courtroom to the workplace are likely to depend increasingly on the rules and norms we establish for AI systems. If we are to ensure that these choices reflect the full range of social, civic, and human rights concerns at stake, civil society will have an important role to play in determining how democratic societies use and live with AI.


Deepening the Response to Authoritarian Information Operations in Latin America

Please join the International Forum for Democratic Studies on November 28th at 11:00 am ET for the launch of a new report examining civil society’s response to the intensification of authoritarian information manipulation targeting democracy in Latin America. Featured speakers include Iria Puyosa (Atlantic Council DFRLab), Marivi Marín Vázquez (ProBox), Adam Fivenson (IFDS), and Fabiola Cordova (NED). Journalist and Carnegie Endowment for International Peace Distinguished Fellow Moisés Naím will moderate the discussion. Register for the Zoom webinar here.

Towards Afro-feminist AI: A Handbook for Approaching Governance of AI in Africa

Pollicy outlines an Afro-feminist approach to AI governance and provides guidance for policymakers and industry stakeholders seeking to regulate the technology. Highlighting the ways in which AI development can reinforce asymmetries of power—for example, by relying on exploitative data collection practices in the Global South to train models—authors Bobina Zulfa and Amber Sinha emphasize the need to center democratic principles such as agency, human dignity, privacy, and non-discrimination when crafting AI regulation. Read the report here.

Can we use AI to enrich democratic consultations? 

In October 2023, Rappler launched a project to test how AI can enrich democratic consultations on AI governance, developing an AI-moderated chat room that gathers and synthesizes insights from users to generate policy proposals. Rappler found that while AI-moderated focus groups offered greater scale and were useful in synthesizing inputs, participants generally preferred human-moderated consultations. The models struggled to generate constitutional, enforceable policies and required human review, highlighting the inadequacy of relying exclusively on AI to conduct public consultations. Read the full report here.

Inside the Brain of a Kazakh Smart City

Coda Story takes a closer look at the deployment in Kazakhstan of PRC surveillance technology. Over the past several years, Kazakh authorities have piloted smart city experiments using hardware from PRC-affiliated technology companies. This move toward technocratic governance has met with mixed reactions from local residents. As smart city infrastructure expands across the country, citizens will have to grapple with lingering questions around the technology’s opacity and the handling of private data, concerns highlighted in a 2022 Forum report on smart cities. Read the article here.

Donor Principles for Human Rights in the Digital Age

The Freedom Online Coalition’s principles provide guidance for governments investing in international development projects involving digital technologies and data collection. Aiming to increase accountability in a rapidly evolving technological landscape, the twelve principles include suggestions such as supporting the development and implementation of digital government and data management systems; fostering and strengthening coordination between stakeholders; and prioritizing digital inclusion. Read the principles here.


Amplification Among Allies: Russian and PRC Information Operations in Latin America

Maria Isabel Puerta Riera of GAPAC details China and Russia’s growing influence operations in Latin America and civil society’s response in a new blog post for Power 3.0. Read the full blog post here.

Russian Influence Campaigns in Latin America

The United States Institute of Peace analyzes how Russian state media, sympathetic Latin American state media, and online influencers have flooded the region’s information space with propaganda and Kremlin-friendly narratives. Using regionally popular rhetoric about anti-colonialism, U.S. overreach, and the need for a new “multipolar world,” Russia has pushed Latin American countries to adopt neutral stances on key political issues, including the war in Ukraine. Authors Douglas Farah and Román D. Ortiz call for greater investment in mapping Russian information operations and in civil society organizations with regional and thematic expertise. Read the full report here.

Slovakia’s election deepfakes show AI is a danger to democracy

In the days leading up to Slovakia’s September 30th, 2023, parliamentary elections, several manipulated audio clips circulated online of Progressive Slovakia Party leader Michal Šimečka discussing how to rig the election and double the price of beer. Though the audio clips were quickly debunked, their reach and impact remain unclear. This example illustrates how AI-generated content is posing new challenges to democracies by accelerating the spread of false and misleading content, which is particularly concerning ahead of the many national elections in 2024. Read the full article here.

How the People’s Republic of China Seeks to Reshape the Global Information Environment

A new report from the Global Engagement Center identifies five elements of PRC efforts to control the information space: the spread of propaganda and censorship through state media and content-sharing agreements with foreign networks; the export of surveillance technology used in “smart cities” and subsequent harvesting of data from foreign publics; the exploitation of international institutions and bilateral partnerships to install Beijing-friendly priorities; the cooption and transnational repression of political figures, academics, journalists, and corporations; and the far-reaching control of Chinese-language media and communication channels. Read the full report here.

Digital Responses to Crises: An Action Plan for Platforms and CSOs Confronting Online Threats

The National Democratic Institute (NDI) highlights the disconnect between platform actions and on-the-ground actors’ needs. NDI includes recommendations to improve crisis response, such as creating reliable, consistent communication channels with civil society and increasing transparency for accountability and research purposes. Read the full report here.

Thanks for reading Digital Directions! If you enjoy this newsletter, forward it to a friend or share on social media so that others can subscribe.