Generative AI and the Information Space: Will ChatGPT Be a Boon to Disinformation at Scale? // January 30
by Joslyn Brodfuehrer, Program Assistant, International Forum for Democratic Studies…and ChatGPT
ChatGPT is a cutting-edge artificial intelligence technology that can generate a wide variety of human-like text, from creative fiction and poetry to news articles and legal briefs. However, its potential to produce highly convincing disinformation at scale poses a significant threat to the integrity of the information space. The technology has the potential to greatly amplify the reach and impact of false information, to make it more difficult for the general public to discern credible information, and to further undermine trust in democratic institutions. But, it also presents an opportunity for the counter-disinformation community to develop new and more effective tools for detecting and combatting disinformation.
If you thought a human penned that introduction, you would be wrong. That was my Big Story co-author, ChatGPT, a generative artificial intelligence (AI) model unveiled by OpenAI last November.
Generative AI is a type of artificial intelligence that uses machine learning algorithms to produce text and other novel content. ChatGPT, a supercharged derivative of its predecessor GPT-3, is not the first of its kind, but its speed and human-like output set it apart. The exemplar chatbot uses a large language model to complete tasks ranging from generating tweets to debugging code, producing responses that are, at times, indistinguishable from those of a human source. In a fractured online information environment overwhelmed by rampant mis- and disinformation, experts fear ChatGPT will further distort the information we consume and diminish trust in it. It might also be the tool malign actors need to turbocharge their influence activities in the information space.
In an interview with the New York Times, AI expert Gary Marcus called the release of ChatGPT a “Jurassic Park moment.” While the technology’s eerily convincing responses position it to become a major disrupter capable of supplanting traditional search engines, it has one critical flaw: generative AI does not always tell the truth. Because natural language processing (NLP) models generate outputs based on learned patterns and lack mechanisms for verifying the integrity of their sources, they are prone to bias and often blend fact with fiction, making them inherently unreliable. Misleading results packaged as fact may leave well-intentioned users trusting, and perhaps inadvertently spreading, misinformation.
ChatGPT, like other AI-enabled tools, mirrors the imperfections of the datasets that trained it. NLP models pick up and parrot the best and worst of the internet, as exemplified by Microsoft’s since-canceled chatbot Tay, which spewed racist, misogynist, and otherwise vulgar tweets. Users have found ChatGPT to be no less biased. Without improved testing and design mitigations that safeguard against unreliable training data, online mis- and disinformation could also pollute the well from which ChatGPT draws its responses.
The danger of these language models lies in the polished façade of their output. ChatGPT is designed to prioritize fluent delivery over substance, presenting seemingly authoritative information irrespective of factual accuracy. Consider the following two prompts as use cases:
When I (a human!) asked ChatGPT to describe the impact of disinformation in Latin America and cite its sources, it generated a confident and coherent response. Although the AI model was unable to cite its sources directly, it referred me to three studies on the topic for further research. My colleagues and I searched for those references and came up empty-handed. While the research institutions ChatGPT named were legitimate and had published some resources on the topic, the specific articles it suggested could not be found.
But not all users have benign motives. In response to my second query, shown above, ChatGPT composed an intelligible explanation of why Chinese propaganda in Congo-Brazzaville was beneficial for the Congolese people, just as a PRC-based propagandist might hope to convey to readers. Such searches raise questions about the technology’s more concerning applications for potential purveyors of disinformation and propaganda.
Experts fear authoritarian regimes and sources of extremist or hate speech may weaponize sophisticated language models like ChatGPT to accelerate their information manipulation activities around the globe. Generative AI equips these malign actors—who already have access to many dissemination channels—with an automated “means of production” for generating cost-effective disinformation at scale. Rather than relying on humans to manipulate social media platforms, purveyors of disinformation can use machines to produce unprecedented volumes of coordinated messaging with potentially enough variation to evade detection.
Although generative AI signals an era of automated influence, malign actors do not have a monopoly on the emergent technology. Opportunities exist for the counter-disinformation community to leverage AI, and some practitioners are already using the technology creatively to restore public trust in the media and maximize impact. A 2021 International Forum essay explains how some African outlets, like Kenya’s Standard Group, are using new automated models to disseminate content, such as a WhatsApp chatbot that provides credible news in response to users’ questions. Other tools use machine learning to detect bots and track dubious social media accounts (e.g., Bot Sentinel and BotSlayer). A recent International Conference on Machine Learning convening on machine learning techniques for disrupting disinformation suggests experts are continuing to think innovatively. Researchers are also using AI-enabled technologies to uncover AI-generated text—and in the wake of ChatGPT’s release, some have already sprung into action to create tools that detect ChatGPT-generated content.
These efforts are promising, but building information space integrity in an age of rapidly developing generative AI will require continued adaptation by civil society. Applying large language models could scale counter-disinformation responses and preempt autocrats’ efforts to further distort the information space with tools like these.