A new report by Beth Kerley (International Forum for Democratic Studies) analyzes priorities, challenges, and promising civil society strategies for advancing democratic approaches to governing artificial intelligence (AI). The report is based on conversations from a private Forum workshop in Buenos Aires, Argentina, that brought together Latin American and global researchers and civil society practitioners.
With recent leaps in the development of AI, we are experiencing a seismic shift in the balance of power between people and governments, posing new challenges to democratic principles such as privacy, transparency, and non-discrimination. We know that AI will shape the political world we inhabit. But how can we ensure that democratic norms and institutions shape the trajectory of AI?
Drawing on global civil society perspectives, this report surveys what stakeholders need to know about AI systems and the human relationships behind them. It delves into the obstacles (from misleading narratives to government opacity to gaps in technical expertise) that hinder democratic engagement on AI governance, and explores how new thinking, new institutions, and new collaborations can better equip societies to set democratic ground rules for AI technologies.
On October 24th, the International Forum for Democratic Studies hosted a virtual event launching the new report “Setting Democratic Ground Rules for AI: Civil Society Strategies.” The Forum’s Beth Kerley shared key findings from the new report. Natalia Carfi of Open Data Charter and Eduardo Carrillo of TEDIC provided comments and shared further insights on opportunities for promoting democratic approaches to AI. The discussion was moderated by Ryan Heath of Axios. Watch the event recording on NED’s YouTube channel.
Advances in artificial intelligence (AI) are transforming political landscapes, impacting how people exercise their rights, and presenting new challenges to democratic principles such as privacy, transparency, accountable governance, and non-discrimination. Democratic AI governance is critical, yet significant barriers to engagement in this area remain. Drawing upon conversations from a private workshop in Buenos Aires, Argentina, the International Forum for Democratic Studies compiled an overview of eight key challenges to and opportunities for the democratic governance of AI.
- AI technologies reflect the human choices and structures behind them. The wide range of technologies described by the term “AI” are shaped by human choices about design and deployment, as well as the social and political contexts that feed into training data. Like all human products, they must be open to challenge by democratic activists and institutions.
- The risks and harms associated with AI challenge traditional assumptions. These impacts can arise at all stages of the AI pipeline, from development to procurement to use, and they may demand new ways of thinking about issues like data protection.
- Opacity around AI hinders democratic engagement. AI systems from surveillance cameras to social-media algorithms already work in the background of our daily lives, and the institutions that deploy them often prefer not to share the details. This reluctance, as well as the inherent complexity of AI systems, can make it hard to map the impacts of these tools.
- Addressing AI impacts will require more than just technical expertise. Because AI risks and harms have social and political roots, they will also require social and political responses. Moreover, these responses may sometimes demand trade-offs between competing democratic values.
- Democracies must close institutional gaps and widen participation in AI governance. Democratic institutions remain broadly unprepared to manage AI harms. Technical expertise on AI is concentrated in the private sector, which places democracies and their publics at a disadvantage in key decision-making processes—many of which exclude civil society and marginalized communities.
- New mechanisms and enduring democratic principles both have important roles to play. Democratic governance of AI may require building specialized institutions, but it also hinges on finding ways to apply existing democratic laws and principles effectively when AI tools enter the picture.
- Tech expertise within civil society can help influence the trajectory of AI technologies. Cutting-edge civil society groups are leveraging their technical skills to pinpoint government or corporate systems’ vulnerabilities; model more inclusive, representative, and responsible approaches to design; and develop AI tools to support civic accountability activities.
- The complexity of AI governance makes cross-sectoral collaboration crucial. AI governance challenges cut across traditional sectoral boundaries. New partnerships and knowledge-sharing initiatives that bring together digital rights groups, traditional human rights groups, journalists, trade unions, teachers, and others can enable civil society organizations to address these issues more effectively.
“Setting Democratic Ground Rules for AI: Civil Society Strategies” by Beth Kerley is a report produced by the National Endowment for Democracy’s International Forum for Democratic Studies.
More On Emerging Technologies from the International Forum:
The Digitalization of Democracy: How Technology is Changing Government Accountability, a report featuring insights from Krzysztof Izdebski, Teona Turashvili, and Haykuhi Harutyunyan
Smart Cities and Democratic Vulnerabilities, a report by Beth Kerley, Roukaya Kasenally, Bárbara Simão, and Blenda Santos
The Global Struggle Over AI Surveillance: Emerging Trends and Democratic Responses, the first report in the Forum’s “Making Tech Transparent” series, by Steven Feldstein, Eduardo Ferreyra, and Danilo Krivokapic
Digital Directions, a curated newsletter sharing insights on the evolving relationships among digital technologies, information integrity, and democracy.
Power 3.0 blog posts: “Putting a Thumb on the Market: The Rise of State-Aligned Platforms from Repressive Contexts” by Allie Funk, “Xi’s Pitch to the Global South on Technological Governance” by Kenton Thibaut, and “Bridging the Gap Between the Digital and Human Rights Communities” by Eduardo Ferreyra
Power 3.0 podcast episodes: “Digitalization and Democracy in Mauritius: A Conversation with Roukaya Kasenally” and “Can Democratic Norms Catch Up with AI Surveillance? A Conversation with Vidushi Marda”