The Global Struggle over AI Surveillance
Through key advances in facial recognition, social media monitoring, and smart policing, AI technology is extending the power of states to monitor their citizens. While entrenched autocracies make eager use of these new capabilities, more open political systems are also incorporating AI surveillance tools, raising troubling questions about the impact on due process, free expression, and active citizenship. Moreover, in the context of global democratic backsliding, unregulated AI surveillance threatens to widen gaps in the rule of law, tilting the playing field toward illiberal governments in settings where checks and balances are already weakened.
My new essay for the International Forum for Democratic Studies’ report “The Global Struggle Over AI Surveillance: Emerging Trends and Democratic Responses” shows how surveillance risks extend across regime types. Beijing uses both biometric surveillance and social media monitoring to create an integrated system of physical and digital control in Xinjiang. Although this example represents an extreme case, the potential for surveillance breakthroughs to subvert privacy, facilitate political persecution or group discrimination, and erode freedoms of expression and association is not unique to autocracies. The risks that advanced surveillance technologies pose are particularly acute in weak democracies and hybrid regimes—“swing states,” here defined according to V-Dem’s global democracy scores, where political environments remain partly open but key liberal-democratic guardrails are weakened or absent in ways that could heighten the appeal of authoritarian digital models.
AI surveillance technology is increasingly accessible as its costs fall and the relevant components become more widely available. Companies based in the People’s Republic of China (PRC) are at the forefront of this market, honing these technologies at home and exporting AI surveillance tools to governments worldwide.
Yet companies based in OECD countries also contribute actively to this growing market, selling predictive policing software, facial recognition algorithms, and social media surveillance applications to both democratic and authoritarian clients, including the PRC. State-led measures to combat the COVID-19 pandemic have only heightened demand for these technologies, and there is a risk that the resulting erosions of data privacy and increasingly invasive surveillance measures will persist beyond the pandemic.
The lack of broad, global norms to govern this technology’s implementation and operation heightens the risk of misuse of AI surveillance tools around the world. Though multilateral fora have made progress in establishing agreements on high-level AI ethical principles, it is unclear how governments or companies will instill these concepts in the development and deployment of AI systems.
Civil society organizations (CSOs) have a critical role to play in shaping AI surveillance practice and policy—particularly as authorities are often inclined to make decisions on these issues in the dark, overlooking human rights principles and other social concerns. CSOs can build awareness about this issue, encourage public debate about the contours of AI surveillance projects, and monitor these initiatives for signs of abuse.
Despite recent progress in addressing this issue, safeguards to rein in abuses of AI surveillance tools remain elusive. Governments worldwide should be more transparent about how they use AI technology. They must also move beyond promulgating high-level AI ethical principles and establish concrete benchmarks and regulations for responsible AI use that reflect international human rights law and standards. Moreover, governments should encourage more effective oversight to better ensure appropriate assessment of surveillance impacts, engaging with outside researchers and CSOs at all stages of this process. Finally, they should work to establish an enduring multistakeholder body mandated to tackle a wide array of surveillance issues. Private companies also need to be more proactive in assessing and addressing human rights impacts.
The PRC is moving rapidly to write the rules for AI systems, and democracies should push back by developing frameworks that ensure responsible use of these technologies. Democracies must define regulatory norms to guide responsible AI use; give citizens more opportunities to take part in deliberations over these systems; and form coalitions of like-minded states to advance shared digital values.
–Steven Feldstein, Senior Fellow, Carnegie Endowment for International Peace