Digital Directions: July 12, 2022

 

Insights on the evolving relationships among digital technologies, information integrity, and democracy from the International Forum for Democratic Studies at the National Endowment for Democracy.

 
  • AI-powered tools are opening the doors to surveillance, presenting new risks in young and fragile democracies;
  • Russia is scaling its censorship and network penetration both in Ukraine and worldwide;
  • Disinformation is intensifying in Brazil and the Philippines as a result of authoritarian tactics from their leaders.

 

THE GLOBAL STRUGGLE OVER AI SURVEILLANCE

Through several key advances that enable approaches like facial recognition, social media monitoring, and smart policing techniques, AI technology is extending the power of states to monitor citizens. While entrenched autocracies make eager use of these new capacities, more open political systems also incorporate AI surveillance tools, raising troubling questions about the impact on due process, free expression, and active citizenship. Furthermore, and in the context of global democratic backsliding, unregulated AI surveillance threatens to widen gaps in the rule of law, tilting the playing field toward illiberal governments in settings where checks and balances are already weakened.

My new essay for the International Forum for Democratic Studies’ report The Global Struggle Over AI Surveillance: Emerging Trends and Democratic Responses shows how surveillance risks extend across regime types. Beijing uses both biometric surveillance and social media monitoring to create an integrated system of physical and digital control in Xinjiang. Although this example represents an extreme case, the potential for surveillance breakthroughs to subvert privacy, facilitate political persecution or group discrimination, and erode freedoms of expression and association is not unique to autocracies. The risks that advanced surveillance technologies pose are particularly acute in weak democracies and hybrid regimes—“swing states,” here defined according to V-Dem’s global democracy scores, where political environments remain partly open but key liberal-democratic guardrails are weakened or absent in ways that could heighten the appeal of authoritarian digital models.

AI surveillance technology is increasingly accessible as overall costs fall and key components become cheaper. Companies based in the People’s Republic of China (PRC) are at the forefront of this market, honing these technologies at home and exporting AI surveillance tools to governments worldwide.

Yet companies based in OECD countries also contribute actively to this growing market, selling predictive policing software, facial recognition algorithms, and social media surveillance applications to both democratic and authoritarian clients, including the PRC. State-led measures to combat the COVID-19 pandemic have only heightened demand for these technologies. There is a risk that erosions of data privacy and use of increasingly invasive surveillance measures will persist beyond the pandemic.

The lack of broad, global norms to govern this technology’s implementation and operation heightens the risk of misuse of AI surveillance tools around the world. Though multilateral fora have made progress in establishing agreements on high-level AI ethical principles, it is unclear how governments or companies will instill these concepts in the development and deployment of AI systems.

Civil society organizations (CSOs) have a critical role to play in shaping AI surveillance practice and policy—particularly as authorities are often inclined to make decisions on these issues in the dark, overlooking human rights principles and other social concerns. CSOs can build awareness about this issue, encourage public debate about the contours of AI surveillance projects, and monitor these initiatives for signs of abuse.

Despite recent progress in addressing this issue, safeguards to rein in abuses of AI surveillance tools remain elusive. Governments worldwide should be more transparent about how they use AI technology. They must also move beyond promulgating high-level AI ethical principles and establish concrete benchmarks and regulations for responsible AI use that reflect international human rights law and standards. Moreover, governments should encourage more effective oversight to better ensure appropriate assessment of surveillance impacts, engaging with outside researchers and CSOs at all stages of this process. Finally, they should work to establish an enduring multistakeholder body mandated to tackle a wide array of surveillance issues. Private companies also need to be more proactive in assessing and addressing human rights impacts.

The PRC is moving rapidly to write the rules for AI systems, and democracies should push back to develop frameworks that ensure responsible use of these technologies. Democracies must define regulatory norms to guide responsible AI use; allow citizens to have more opportunities to be involved in the deliberation process; and form coalitions of like-minded states to advance shared digital values.

–Steven Feldstein, Senior Fellow, Carnegie Endowment for International Peace

 

 

CENSORSHIP INTENSIFIES IN THE PHILIPPINES: Authorities in the Philippines have ordered the shutdown of Rappler, the investigative news site founded by Nobel Laureate Maria Ressa, for the second time (the first order came in 2018). The order comes amid concerns about increased media suppression and free speech violations under incoming President Ferdinand Marcos Jr. As reported by Rappler, Marcos leveraged an alliance with outgoing President Duterte to spread propaganda and promote their authoritarian practices. Duterte himself has manipulated online discourse and targeted regime critics.

GROWING DISINFORMATION IN BRAZIL AHEAD OF ELECTIONS: Rampant disinformation is flooding Brazil’s information ecosystem ahead of the country’s October elections. Incumbent president Jair Bolsonaro has already spread disinformation about voting irregularities and claims that the election will be rigged. A legal resolution earlier this year theoretically banned mass messaging of this kind, but the likelihood of enforcement is questionable. Nieman Labs suggests that platforms should clarify content moderation policies and become more transparent about how these policies are enforced in practice.

 

 

MICROSOFT PUBLISHES REPORT ON LESSONS FROM UKRAINE: Microsoft published research from the company’s data science and intelligence teams aimed at enhancing understanding of the cyber landscape in Ukraine. Russia has expanded its network penetration and surveillance activities outside of Ukraine and conducts foreign influence operations to support its war effort; Bulgaria is one country where such activities appear to have had a strong impact. The report includes a call to action for collaboration among governments, NGOs, and the private sector, directed at detecting, defending against, disrupting, and deterring Russian threats.

IN OCCUPIED REGIONS, UKRAINE’S INTERNET IS ROUTED TO RUSSIA: Internet service providers in occupied areas of Ukraine have been forced to redirect services through Russian providers, or else shut down communications. As a result, web traffic is being routed through Russia’s censorship and surveillance network. Moscow is attempting to control telecommunications networks in Ukraine by offering blank SIM cards and moving users over to Russia-linked mobile companies.

 

USAID RELEASES ARTIFICIAL INTELLIGENCE ACTION PLAN: The recently released USAID Artificial Intelligence Action Plan was crafted to develop a responsible approach to AI in USAID’s activities. The action plan, as summarized by ICTworks, makes a more formal commitment to responsible AI in USAID programming, calls for focused attention to AI in efforts to strengthen digital ecosystems through governance and oversight, and prioritizes the support of civil society groups and activists to facilitate an inclusive AI agenda.

RUSSIAN VPN USE INCREASES: Russian citizens have increasingly turned to VPN software to access blocked websites such as social media platforms and news media, determined to work around government restrictions and access independent reporting. In response, state censor Roskomnadzor has been seeking to block VPN traffic. An analysis of Russia’s public procurement database, however, found that Russian companies and government agencies have also invested heavily in VPN software: some 236 contracts for VPN technology have been signed since Russia’s invasion of Ukraine, and domestic demand for VPN apps has skyrocketed.

 

 

SHEDDING LIGHT ON AI SURVEILLANCE: In the International Forum’s new publication, “The Global Struggle Over AI Surveillance: Emerging Trends and Democratic Responses,” Steven Feldstein assesses the global spread of AI surveillance tools, while Eduardo Ferreyra and Danilo Krivokapić offer two case studies on civil society responses to opaque surveillance deals.

PLATFORMS AND THE PRESS: In a report by the Center for International Media Assistance, “Making Big Tech Pay for the News They Use,” Courtney Radsch investigates how the relationship between tech giants and the news industry can be rebalanced.

POWER 3.0 BLOG POSTS: Two new Power 3.0 blog posts feature original research on authoritarian-sourced disinformation networks. Marc Owen Jones analyzes a network of fake Chinese language Twitter accounts amplifying pro-Beijing, pro-Moscow, anti-democratic, and anti-Western narratives. Claudia Flores-Saviaga and Deyra Guerrero describe Latin American fact-checking organizations and innovative cross-regional collaborations as they try to keep pace with Kremlin-backed disinformation.

JOURNAL OF DEMOCRACY ON DIGITAL AUTHORITARIANS: The April Journal features an article by Ronald J. Deibert, “Subversion Inc: The Age of Private Espionage,” drawn from Deibert’s 2021 Seymour Martin Lipset Lecture on Democracy in the World hosted by NED. Meanwhile, Samantha Hoffman, author of a 2021 Forum report on sharp power and emerging technologies, examines “China’s Tech-Enhanced Authoritarianism.”

HAPPY HOUR: Join NED, NDI, IRI, and IREX for a happy hour and discussion of the Forum’s AI surveillance report at the next Technology for Democracy Happy Hour on July 13th from 5:30 p.m. to 7:30 p.m. at the Admiral in Washington, DC.

 


Thanks for reading Digital Directions, a monthly newsletter from the International Forum. 

Sign up for future editions here!

 
