Big Question: How Does Digital Privacy Matter for Democracy and Its Advocates?

Edited by Maya Recanati and Beth Kerley

Emerging technologies are creating new digital privacy risks that can undermine key democratic values and practices. For example, generative AI tools simultaneously increase incentives for data collection and accelerate authoritarian influence operations; 5G networks are connecting ever more devices to the Internet of Things (IoT); and immersive technologies such as augmented and virtual reality collect unprecedented types of information on users. Amid this rapidly changing landscape, authoritarian actors such as China are exporting surveillance technologies that enable mass data collection and processing to democracies and autocracies alike, affecting the work and safety of democratic activists and creating risks for everyday citizens. There is not yet a shared understanding of digital privacy’s implications for democracy, particularly across varying political and legal contexts. While the EU’s landmark General Data Protection Regulation (GDPR) has set a legal benchmark for data privacy in many global settings, enforcement challenges and the evolving nature of digital technologies themselves leave many fundamental questions in this space unresolved.

Given these challenges, the International Forum for Democratic Studies asked five leading experts to consider the following questions: How does digital privacy matter for democracy and its advocates? In what ways does the collection of digital data create risks to your own work or that of other democratic activists?


 

Adrian Shahbaz, Freedom House

Every day, we generate huge amounts of information about our own movements, habits, and social interactions—all collected by commercial entities. Advances in artificial intelligence (AI) have increased the speed and scale at which these massive datasets can be monitored and analyzed. Around the world, governments have partnered with the private sector to leverage AI for controlling their populations and enacting various forms of digital repression. The risk is most acute in countries with weak institutional safeguards and poor respect for human rights.  

Governments increasingly rely on AI to help stamp out dissent. For example, authorities in Kazakhstan purchased automated tools to “discover information that poses a threat to socio-political stability” and to analyze “the likelihood of protests” in a particular region. Cameras equipped with facial recognition technology have expanded invasive monitoring from social media to the public square. Police in Moscow used facial recognition tools to identify and preemptively detain dozens of journalists and activists over concerns they were headed to a protest. Iranian officials warned that facial recognition systems would be used to punish women for “failure to obey hijab laws.” 

AI has also exacerbated discrimination on the basis of gender, sexuality, belief, and ethnicity. In Azerbaijan, security agencies used surveillance products to determine people’s “sexual inclinations” from their Facebook accounts during a violent crackdown on the country’s LGBT+ community. An investigation found that Indian law enforcement’s facial recognition systems have been inaccurate and disproportionately impact the rights of Muslims. Chinese authorities purchased facial recognition cameras with the specific purpose of profiling and monitoring people based on their ethnicity, particularly Uyghurs and other Muslims. At a police conference in Dubai, several companies tried to sell “emotion recognition” technology—which has faced criticism for its lack of scientific basis and high risks to human rights—to officials from repressive governments.

Digital privacy is necessary for democracy to thrive. As digital monitoring expands from online to offline spaces and eventually into our physical bodies, digital privacy has become fundamental to preserving our understanding of freedom, autonomy, and human dignity—the values underpinning democracy itself.  


Adrian Shahbaz is Vice President for Research and Analysis at Freedom House.

 

Andrej Petrovski, SHARE Foundation

Amid rapid technological advances, protecting privacy is critical to ensure democracy’s survival. Currently, data-driven technologies operate on power- and profit-seeking principles that tilt the scales against independent civic actors. From campaigning to policing, the uncontrolled uptake of digital tools threatens civic space, free expression, and other human rights—especially in fragile democracies.  

For instance, political advertisers rely on personal information—collected by tech companies and data brokers with little regard for privacy—to target their messages online. This system can be easily manipulated by authoritarian-leaning actors who have the resources to flood the information space with polarizing and misleading advertisements. For example, as SHARE (my organization) reported, Serbia’s governing party spent more on advertising on Meta platforms than all other political groups combined during the 2022 elections. This digital strategy enabled the party to circumvent transparency and spending rules that apply to traditional advertising. Additionally, because online ad auctions award space to the highest bidder, audiences targeted by the governing party were unlikely to see ads from other candidates at all.

Large-scale personal data collection together with advances in artificial intelligence (AI) also augments state surveillance. Mass data processing technologies exported from autocracies like China—such as the Huawei facial recognition cameras deployed by authorities in Belgrade—can be used to target democratic activists, compromising their work and safety. Surveillance can also lead to self-censorship and, by extension, a decline in civic engagement. Sources may be less willing to meet with journalists, for instance, where mass biometric surveillance catalogues every human transaction in a digital database. In this context, civic actors cannot thrive within a system intolerant of diversity.


The intersection of digital privacy and democratic backsliding is especially pronounced in “swing states” on the border between democracy and autocracy, where authoritarian actors stand ready to exploit personal data.  

In the face of escalating digital privacy risks, the democratic community must devise strategies to prevent weaker democracies from sliding toward autocracy. More robust regulation of the digital surveillance tools that would-be autocrats are exploiting, especially those involving AI, is needed. Effective safeguards should not only regulate data collection and usage but also ensure transparency and accountability from both the governmental and private entities that wield powerful data-driven technologies.

Andrej Petrovski is the Director of Tech/CERT at SHARE Foundation and a board member at EDRi. He is based in Belgrade, Serbia, and has a background in software engineering and a master’s degree in electronic crime and digital forensics.

 

Lindsay Gorman, German Marshall Fund

Artificial intelligence (AI) drives value creation by processing massive quantities of data into models of human thought, speech, activity, and function. As actors across the globe capitalize on this innovation, the foundational connection between privacy and free societies is at risk.  

Privacy is fundamental to human dignity and self-determination, and it is protected as such under Article 12 of the Universal Declaration of Human Rights. Privacy establishes divisions between the political and personal spaces individuals occupy. In doing so, it fortifies free expression and assembly, laying the groundwork for democratic activism free from tracking, intimidation, and censorship. For these reasons, privacy is often considered a gateway right in liberal democracies, one that undergirds the exercise of every other right. In autocratic societies, by contrast, a fusion of the personal and political spheres fosters state control, depriving individuals of spaces in which they can safely dissent.

This discussion is by no means theoretical. In 2019, pro-democracy protesters in Hong Kong sported umbrellas to hide their faces, sprayed paint on cameras, and sawed down a smart lamppost in an effort to escape growing techno-surveillance. In concert with its sophisticated online censorship regime, the People’s Republic of China (PRC) is pioneering integrated AI-driven surveillance systems leveraging facial recognition, DNA and cyber surveillance, and location monitoring. “Safe city” systems exported along the PRC’s “Digital Silk Road” promise Minority Report-style detectors to anticipate crime before it happens, tipping the scales in favor of would-be dictators in even partly free societies.

In states like the PRC, where the separation between public and private sectors exists in name only, the inverse correlation between AI-driven economic opportunity and individual privacy is a recipe for tightening autocratic control. Globally, the spread of this techno-authoritarian approach threatens to deepen democratic backsliding by providing ready-made tools for autocratic leaders and governments to quash dissent.  


To begin to reverse this trend, democracies must bring innovation to the table alongside law and policy. While AI systems that incentivize mass data collection may be the readiest opportunity, they need not be the only one. Privacy-preserving AI techniques that train machine learning models without compromising the privacy of underlying datasets should be advanced and adopted broadly.  
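One family of such techniques is differential privacy. As a rough, minimal sketch of the core idea (synthetic data, illustrative hyperparameters, and no formal privacy accounting; this is not any particular production system), the Python snippet below trains a simple model while clipping each record’s gradient contribution and adding calibrated noise, so that no single person’s data leaves a distinguishable imprint on the result:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 200 records, 5 features, binary labels (synthetic, illustrative).
X = rng.normal(size=(200, 5))
true_w = rng.normal(size=5)
y = (X @ true_w > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(5)
clip_norm = 1.0    # cap on any single record's gradient contribution
noise_mult = 0.5   # noise standard deviation, as a multiple of clip_norm
lr = 0.5

for step in range(200):
    preds = sigmoid(X @ w)
    # Per-record logistic-loss gradients, shape (n, d).
    grads = (preds - y)[:, None] * X
    # Clip each record's gradient so no individual dominates the update.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    clipped = grads / np.maximum(1.0, norms / clip_norm)
    # Average, then add calibrated Gaussian noise to mask any single record.
    noisy_grad = clipped.mean(axis=0) + rng.normal(
        scale=noise_mult * clip_norm / len(X), size=w.shape
    )
    w -= lr * noisy_grad

acc = ((sigmoid(X @ w) > 0.5) == y).mean()
print(f"training accuracy with clipped, noised updates: {acc:.2f}")
```

The design point is the trade-off the sketch makes visible: the clipping bound and noise scale limit what the model can memorize about any one individual, at some cost in accuracy that real systems tune carefully.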

Lindsay Gorman is the Senior Fellow and Head of Technology and Geopolitics at the German Marshall Fund’s Alliance for Securing Democracy. A physicist and AI engineer by training, she served in the White House from 2021 to 2022 as an advisor on technology strategy and national security.

 

Thobekile Matimbe, Paradigm Initiative

Globally, digital surveillance and illicit data collection by states increasingly threaten the space within which human rights defenders and journalists perform their critical work. Many governments seem inclined to treat smartphones and laptops as dangerous weapons, rather than what they actually are—enabling tools that ensure the free flow of information, protected under international human rights agreements. By eroding civic space and undermining freedom of expression, these assaults on digital privacy ultimately threaten the practice of democracy itself.


This issue has become particularly acute in Africa, where owning the basic digital tools needed to share and access information can pose risks to activists and journalists. State authorities arbitrarily seize personal devices and capitalize on their widespread use to access private communications. For example, on April 5, 2022, police arbitrarily detained Malawian journalist Gregory Gondwe, seized his devices, and accessed personal messages. Similarly, in Zimbabwe, the government seized devices from about 40 civil society actors who had been arrested in August 2023 over their monitoring of presidential elections held earlier that month.

In addition to snooping on independent civic actors, political forces also seek illicit data access in order to tilt the electoral playing field. Ahead of the August 2023 election in Zimbabwe, voters received text messages from an unknown sender campaigning for President Emmerson Mnangagwa. Following public outcry, it was reported that the Zimbabwe Electoral Commission had leaked the phone numbers of registered voters to the ruling political party. 

Elsewhere, governments have introduced repressive laws and practices that undermine anonymity online. In October 2023, for instance, regulators in Tanzania directed all individuals and companies using Virtual Private Networks (VPNs) to declare this usage and provide information including their Internet Protocol (IP) addresses.  

Measures like these narrow civic space, compromise electoral integrity, and make it harder for citizens to safely share and access information online. They chill civic activity by subjecting human rights defenders and media practitioners—and even just engaged, voting citizens—to unwelcome intrusions in their confidential communications. In this context, meaningful digital privacy protections are crucial to ensure that free expression and access to information can thrive. 

Thobekile Matimbe serves as Senior Manager, Partnerships and Engagements at Paradigm Initiative (PIN), advancing digital rights and inclusion. She is a lawyer and researcher with expertise in civic engagement and human rights advocacy.

Elizabeth Donkervoort, China Strategic Risks Institute

Privacy is a gateway right, foundational to human rights such as freedom of expression, thought, belief, association, assembly, and non-discrimination. It encompasses rights to secrecy and anonymity, which are vital to developing our personalities and our interactions with the world.

Yet the dynamic nature of privacy, which is shaped by cultural, social, and individual norms and contexts—for example, you share different information with your friends than with your boss—makes it complicated to regulate. As the Cambridge Analytica scandal highlighted, privacy is also networked: Our privacy may be at risk due to information gathered and shared by others about us (e.g., a company sharing your image on its Facebook page) rather than information we ourselves choose to share.

More broadly, the business model of tech companies, focused on collecting, analyzing, and selling vast quantities of user data, conflicts with the safety needs of human rights defenders. Activists currently face a Faustian bargain: They must use a system founded on the violation of privacy rights to communicate with the world about this same abuse and others.

Emerging technologies like facial recognition threaten traditional expectations of anonymity in public spaces, further undercutting human rights. In addition to their use to identify and arrest protesters, facial recognition systems pose a risk to political dissidents, members of vulnerable communities such as LGBT+ people, and other potential targets as they go about their personal lives. Faceprints (i.e. digital facial mappings) exacerbate privacy concerns by enabling ongoing tracking and AI training after original images are deleted.  

Legal actions against Meta and Clearview AI highlight the potential to compel due diligence in the use of data sets. However, privacy’s dynamic nature and the structure of AI models make solutions such as exact data deletion (removing all traces of a specific piece of personal data) almost impossible.  
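To see why, consider a deliberately simplified sketch (synthetic data and a toy linear model, chosen only for illustration): once a record has shaped a model’s parameters, deleting the raw data point leaves the trained model untouched, and only retraining without the record, or a verified “machine unlearning” procedure, actually removes its influence.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic records: 100 rows, 3 features, plus a noisy linear target.
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

def fit_least_squares(X, y):
    # Ordinary least squares: w = argmin ||Xw - y||^2.
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Model trained on the full dataset, record 0 included.
w_full = fit_least_squares(X, y)

# "Deleting" the raw record afterward does not change w_full at all;
# only retraining without the record yields parameters free of its influence.
w_retrained = fit_least_squares(X[1:], y[1:])

print("largest parameter shift from one record:",
      np.abs(w_full - w_retrained).max())
```

In large AI models the same imprint is spread across billions of parameters and countless derived copies, which is why verifying that a specific record’s influence has been fully removed remains an open problem.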

Moreover, the global technology discourse that shapes privacy standards often fails to resonate in global majority countries due to its Eurocentric bent—a predictable outcome given the underrepresentation of these countries at forums, like the UK’s recent AI Safety Summit, where these issues are discussed. This disconnect, in turn, yields an opening in these underrepresented societies for oppressive or autocratic regulatory models such as those promoted by China. 

Many barriers inhibit civil society participation in developing privacy norms, including a lack of technical expertise and access to key decision makers. Nonetheless, it is essential that a diverse range of stakeholders develop laws and policies that create enabling conditions for privacy and recognize the networked nature of information sharing.


Elizabeth Donkervoort is a director at the newly founded China Strategic Risks Institute and program director for the American Bar Association’s Center for Global Programs, overseeing East Asia, Internet Freedom, and Malign Influence programs around the world. This submission is made in her personal capacity and does not reflect the policy of either institution.

 



Respondents’ answers have been edited for length and clarity, and do not necessarily reflect the views of the National Endowment for Democracy.
