Digital Directions: January 30, 2023

Insights on the evolving relationships among digital technologies, information integrity, and democracy from the International Forum for Democratic Studies at the National Endowment for Democracy.

  • Today, we welcome a guest author for our Big Story: ChatGPT. Generative AI models like ChatGPT present new opportunities, but they could also distort the information environment and amplify authoritarian information activities.
  • The Kremlin is deepening its information operations in the Global South—especially in Latin America and the Caribbean—to build greater support for its invasion of Ukraine.
  • Amidst a global decline in internet freedom, democracies like Costa Rica offer lessons about how states can effectively safeguard digital rights in the interconnected online ecosystem.

[Header image created using another generative AI tool, DALL-E 2.]

generative ai and the information space: will chatgpt be a boon to disinformation at scale? // january 30

by Joslyn Brodfuehrer, Program Assistant, International Forum for Democratic Studies…and ChatGPT

ChatGPT is a cutting-edge artificial intelligence technology that can generate a wide variety of human-like text, from creative fiction and poetry to news articles and legal briefs. However, its potential to produce highly convincing disinformation at scale poses a significant threat to the integrity of the information space. The technology has the potential to greatly amplify the reach and impact of false information, to make it more difficult for the general public to discern credible information, and to further undermine trust in democratic institutions. But, it also presents an opportunity for the counter-disinformation community to develop new and more effective tools for detecting and combatting disinformation.

If you thought a human penned that introduction, you would be wrong. That was my Big Story co-author ChatGPT, a generative artificial intelligence (AI) model unveiled by OpenAI last November.

Generative AI is a type of artificial intelligence that uses machine learning algorithms to generate text and other novel content. ChatGPT—a supercharged derivative of its predecessor GPT-3—is not the first of its kind, but its speed and human-like output set it apart. The chatbot uses a large language model to complete tasks ranging from generating tweets to debugging code, producing responses that are, at times, indistinguishable from those of a human source. In a fractured online information environment overwhelmed by rampant mis- and disinformation, experts fear ChatGPT will further distort and diminish trust in the information we consume. It might also be the tool malign actors need to turbocharge their influence activities in the information space.
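To make the underlying mechanics concrete, here is a minimal, hypothetical sketch of the kind of next-word text generation these models perform. It uses the small open-source GPT-2 model via the Hugging Face transformers library as a stand-in for the far larger models behind ChatGPT; the prompt and sampling parameters are illustrative assumptions, not details drawn from ChatGPT itself.

    # Minimal sketch: autoregressive text generation with a small open model.
    # GPT-2 stands in for the much larger models that power tools like ChatGPT.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    prompt = "Disinformation threatens democratic institutions because"  # illustrative prompt
    outputs = generator(prompt, max_new_tokens=50, do_sample=True, temperature=0.8)
    print(outputs[0]["generated_text"])

Note that the model simply continues the prompt with statistically likely words; nothing in this process checks whether the continuation is true, which is the reliability problem discussed next.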

In an interview for the New York Times, AI expert Gary Marcus called the release of ChatGPT a “Jurassic Park moment.” While the technology’s eerily convincing responses position it to become a major disrupter capable of supplanting traditional search engines, it has one critical flaw: generative AI does not always tell the truth. Because natural language processing (NLP) models generate outputs based on learned patterns and lack mechanisms for verifying the integrity of their sources, they are prone to bias and often blend fact with fiction, making them inherently unreliable. Misleading results that are well-packaged as facts may leave well-intentioned users trusting and, perhaps inadvertently, spreading misinformation.

ChatGPT, like other AI-enabled tools, mirrors the imperfections of the datasets that trained it. NLP models pick up and parrot the best and worst of the internet, as exemplified by Microsoft’s since-canceled chatbot Tay, which spewed racist, misogynist, and otherwise vulgar tweets. Users have found ChatGPT to be no less biased. Without improved testing and design mitigations that safeguard against unreliable training data, online mis- and disinformation could also pollute the well from which ChatGPT draws its responses.

The danger of these language models lies in the façade of their output. ChatGPT’s algorithm is programmed to prioritize fluid language delivery over substance, presenting seemingly authoritative information irrespective of factual accuracy. Consider the following two prompts as use cases:

When I (a human!) asked ChatGPT to describe the impact of disinformation in Latin America and cite its sources, it generated a confident and coherent response. Although the AI model was unable to cite its sources directly, it referred me to three studies on the same topic for further research. My colleagues and I searched for ChatGPT’s suggested references and came up empty-handed: while the research institutions were legitimate and had some resources on the topic, the specific articles ChatGPT suggested could not be found.

But not all users have benign motives. In response to my second query, ChatGPT composed an intelligible explanation of why Chinese propaganda in Congo-Brazzaville was beneficial for the Congolese people, just as a PRC-based propagandist might hope to convey to readers. Such searches raise questions about the technology’s more concerning applications for potential purveyors of disinformation and propaganda.

Experts fear authoritarian regimes and sources of extremist or hate speech may weaponize sophisticated language models like ChatGPT to accelerate their information manipulation activities around the globe. Generative AI equips these malign actors—who already have access to many dissemination channels—with an automated “means of production” for generating cost-effective disinformation at scale. Rather than relying on humans to manipulate social media platforms, purveyors of disinformation can use machines to produce unprecedented volumes of coordinated messaging, potentially with enough variation to go undetected.

Although generative AI signals an era of automated influence, malign actors do not have a monopoly on the emergent technology. Opportunities exist for the counter-disinformation community to leverage AI, and some are already creatively using the technology to restore public trust in the media and maximize impact. A 2021 International Forum essay explains how some African outlets, like Kenya’s Standard Group, are using new automated models to disseminate content, such as a WhatsApp chatbot that provides credible news in response to users’ questions. Other tools use machine learning to detect bots and track dubious social media accounts (e.g., Bot Sentinel and BotSlayer). A recent International Conference on Machine Learning convening on techniques for disrupting disinformation suggests experts are continuing to think innovatively. Researchers are also using AI-enabled technologies to uncover AI-generated text—and in the wake of ChatGPT’s release, some have already sprung into action to create tools that detect ChatGPT-generated content (one common approach, sketched below, flags text that a language model finds unusually predictable).
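That predictability heuristic can be shown in a few lines. The sketch below is a simplified, hypothetical illustration of perplexity-based detection: the idea, used in variations by tools such as GPTZero, that machine-generated prose tends to look more statistically predictable to a language model than human writing does. It uses GPT-2 via the Hugging Face transformers library; real detectors combine signals like this with other features, and none are foolproof.

    # Simplified sketch: score a text's perplexity under a language model.
    # Unusually low perplexity (highly predictable text) can hint at machine
    # authorship, though this heuristic alone is unreliable in practice.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # mean per-token cross-entropy
        return torch.exp(loss).item()

    # Illustrative comparison; real tools calibrate thresholds on labeled data.
    print(perplexity("ChatGPT is a cutting-edge artificial intelligence technology."))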

These efforts are promising, but building integrity in the information space amid rapidly developing generative AI will require continued civil society adaptation. Applying large language models could scale counter-disinformation responses and preempt autocrats’ efforts to further distort the information space using tools like these.

russia’s “deeply sinister to absolutely absurd” ukraine narratives

Russian propaganda about the full-scale invasion of Ukraine advances several defining narratives: questioning Ukraine’s legitimacy as an independent state, describing the invasion as a defensive move, and denying or deflecting blame for atrocities. Kremlin-sourced disinformation has been characterized as ranging from “deeply sinister to absolutely absurd,” but its narratives resonate with many. Recent research from Ukraine’s Detektor Media has found that Ukraine is the Kremlin’s most common disinformation topic, appearing mainly on social media.

latin america’s vulnerability to russian disinformation

As Russian propaganda adapts to the changing landscape in Europe, the Kremlin has pushed its messaging further into places where opposition to it is less robust, especially in the Global South. In Latin America, Russia has attempted to influence public opinion through social media and state-backed broadcasting. One study found that Kremlin-backed posts on Facebook drew largely positive reactions and high user engagement. The pervasiveness of such disinformation in Latin America, coupled with low media literacy and low trust in the media, leaves the region vulnerable.

deeper safety risks from changes at twitter

Elon Musk’s Twitter takeover has rippled across the global information space. The platform’s Trust and Safety Council has been dissolved, safety-related jobs have been eliminated, and many previously banned accounts have been welcomed back—leading many users to migrate off the site. (Allie Funk of Freedom House recently wrote for the International Forum about this exodus toward other platforms.) Several journalists have been suspended from the platform, including those who write critically about Musk.

china-sourced surveillance foothold grows in zimbabwe

President Emmerson Mnangagwa has leaned on foreign surveillance technology to stifle dissent and shrink Zimbabwe’s civic space since a coup in 2017. Beijing is the source of most of this technology, fitting within a broader pattern of China-Africa cooperation on digital infrastructure. PRC vendors and lenders have partnered with the Zimbabwean government on telecommunications infrastructure, data centers, facial recognition systems, and smart city projects. (Digital authoritarian risks around smart cities were examined in a recent report by the International Forum.)

costa rica’s model for internet freedom

Freedom House’s Freedom on the Net contributors argue that Costa Rica’s positive record on internet freedom offers lessons about the key ingredients for protecting digital rights. A successful multistakeholder approach has enabled institutions across Costa Rica’s government, civil society, and the private sector to expand digital infrastructure and combat misinformation while still protecting free speech and boosting cybersecurity. For example, the country’s electoral court collaborated with Facebook to create a system to remove electoral misinformation ahead of the 2022 ballot.

the weaponization of covid-19 data

Since the start of the COVID-19 pandemic, a broad spectrum of governments have collected data on individuals through technological tools advertised as key to stopping the virus’s spread. The Associated Press reports that these tools will likely have ongoing ramifications for civil liberties in authoritarian and democratic states. In China, app-based health codes have been used to obstruct travel by potential protesters. In more democratic settings such as Australia, India, and Israel, the repurposing of data from COVID-19 apps and the expanded use of techniques such as facial recognition and cell phone tracking have drawn concern. (See here for the Forum’s research on COVID-19’s impact on democracy.)

The International Forum will hold a virtual public event examining smart cities in the context of global democratic backsliding on Tuesday, February 7, from 12:00 p.m. to 1:30 p.m. EST (RSVP HERE). Bárbara Simão and Roukaya Kasenally, contributors to the International Forum’s recent Smart Cities and Democratic Vulnerabilities report, will discuss smart city governance in Brazil and Mauritius, with comments from Larry Diamond on the wider implications for struggling democracies.

In a new Power 3.0 blog post, Allie Funk analyzes the mass exodus from U.S.-based social media platforms and the growing interest in diverse alternatives. However, many of these new platforms, both state and privately owned, are targets of government censorship and surveillance.

The Journal of Democracy released its January 2023 issue. In one of the essays, “China’s Threat to Global Democracy,” Michael Beckley and Hal Brands discuss Beijing’s “effort to make the world safe for autocracy and to corrupt and destabilize democracies.”


Thanks for reading Digital Directions! If you enjoy this newsletter, forward it to a friend or share on social media so that others can subscribe.

 
