Issue Brief: How Disinformation Impacts Politics and Publics


For additional resources on disinformation, visit our blog, Power 3.0: Understanding Modern Authoritarian Influence.

In this Brief:

  • How disinformation is used and consumed
  • Proactive and reactive disinformation strategies in different country contexts
  • The scale of the disinformation crisis

Expanding the Analytical Frame

Disinformation—the use of half-truth and non-rational argument to manipulate public opinion in pursuit of political objectives—is a growing threat to the public sphere in countries around the world. The challenge posed by Russian disinformation has attracted significant attention in the United States and Europe; over time, observers have noted its role in “hybrid warfare,” its use to degrade public trust in media and state institutions, and its ability to amplify social division, resentment, and fear.

But Moscow is merely the most prominent purveyor of disinformation, not its sole source. Political actors around the world, ranging in size from state agencies to individuals, have found ways to exploit the economics of digital advertising and the fast-paced nature of the modern information ecosystem for their political advantage. Growing appreciation of the problem’s scale invites a shift in frame: from a national security threat posed by a discrete actor to a broader recognition of political-economic weaknesses in the contemporary information space.

Disinformation has a wider variety of purposes, in a wider variety of settings, than is commonly appreciated. In the short term, it can be used to distract from an issue, obscure the truth, or inspire its consumers to take a certain course of action. In the long term, disinformation can be part of a strategy to shape the information environment in which individuals, governments, and other actors form beliefs and make decisions.


Disinformation as a Reactive Tactic

In the short term, disinformation can be used reactively by different entities: for example, when Russian-backed fighters in eastern Ukraine shot down a commercial airliner, Russian state media went into overdrive, proposing multiple, often conflicting alternative explanations for the plane’s crash.

Disinformation’s applications have also been evident in Syria, where Russian diplomats, media outlets, and intelligence services have falsified evidence, pushed misleading narratives, and spread falsehoods about the role of Russia’s airstrikes, and have worked to obscure evidence of the Syrian government’s use of chemical weapons.

Another common technique is to react to a crisis by flooding the information space and drowning out discussion. After opposition protests broke out in Syria during 2011, newly created Twitter accounts began harassing Syrian users, and social media researchers allege that the Assad regime paid a public relations firm to flood opposition hashtags with photos of nature scenery and sports scores.

 

Bots and Trolls Shape Political Conversation Online

Online trolling, harassment, and distraction—especially by highly active automated accounts—are a key component of the modern disinformation purveyor’s toolkit. These techniques push independent voices out of public spaces and are sometimes considered a new form of political censorship. The Chinese Communist Party (CCP) was an early pioneer of this approach: for at least a decade, Beijing has deployed a “fifty-cent party” (apocryphally named for posters’ going rate per post) to “astroturf” support for the government and derail online political conversations that could spark mass mobilization. Recent estimates suggest this effort encompasses two million individuals, many of them state employees, and produces nearly 450 million social media posts per year.

Over time, similar approaches became a common aspect of authoritarian information manipulation and were later amplified through automation. In the early to mid-2000s, the Russian government began recruiting human commenters before later adopting the use of automated “bot” accounts. One study suggests that more than half of all tweets in Russian are produced by automated accounts. To avoid detection, many disinformation campaigns now avail themselves of accounts that are partially automated and partially controlled by human users; these are often referred to as “cyborg” or “sock puppet” accounts.

In recent years, the use of bots and trolls to shape online discussion has become so common across countries that it could be considered a widely exploited bug in the digital public square, extending far beyond conflict or authoritarian settings. In Mexico, paid political consultants orchestrated the theft of campaign secrets and the large-scale distribution of disinformation to voters. Such activity continues to this day, as pro-government accounts swarm political hashtags, threaten the lives of activists, and marginalize protesters.

In the Philippines, where the public square faces significant threats both online and off, interview-based research has explored a sophisticated “underground” public relations industry in which digital strategists, social media influencers, and paid commenters compete to deliver their clients the greatest degree of control over political narratives on the internet. In a stroke of market innovation, the subcontracting of digital disinformation in the Philippines has tied the financial and career incentives of competing freelancers to the objectives of national political parties, to devastating effect.

 

Proactive Disinformation and the “Demand Side” of the Challenge

The effectiveness of “reactive” disinformation is limited by the unpredictability of real-world events. While it can offer those who use it a lifeline in times of crisis, reactive disinformation is by definition a response to unexpected, uncontrollable, or undesirable events and is therefore generally used by actors in disadvantageous strategic positions. Used proactively, disinformation offers much greater potential to move audiences to action, shape or confuse public understanding, and influence political events.

However, it does not provide a blank canvas on which to work. Effective disinformation campaigns usually draw on preexisting divides within target societies and produce content for which there is societal demand. Disinformation is most dangerous when it amplifies existing political beliefs and divisions rather than introducing new beliefs or narratives into the public sphere. It is effective in doing so partly because of low trust in media and partly because of cognitive biases that make many consumers more likely to believe content that confirms their beliefs, to prefer partisan cheerleading over the conclusions of fact-checkers, and to share content that makes them angry or afraid. Research into the impact of social media use on political polarization is ongoing, but at a minimum it suggests that the emergence of social media platforms as news sources has diminished the power of traditional “gatekeepers” of news and information. In turn, social media seems to have increased the social and political influence of a voracious subset of news consumers engaged in “motivated reasoning”—the selective interpretation of information to justify one’s preexisting beliefs, stances, or desires. These factors, combined with the speed at which information spreads online, create ideal conditions for disinformation campaigns.


Digital Disinformation Can Inspire Real-World Action

Proactive disinformation campaigns can achieve real-world impact by influencing the actions of their consumers. A prominent example comes from Germany’s 2016 “Lisa case,” which ignited nationwide debate over the country’s resettlement of Middle Eastern refugees and offered Moscow an opportunity to stoke divisions within Germany. Lisa, a thirteen-year-old Russian-German girl, alleged that two migrant men had kidnapped and raped her; the allegations were later proven untrue, but not before Russian state media actively spread the story and the Russian Foreign Minister publicly accused Berlin of a cover-up. In Germany, thousands protested the government’s handling of the case. By using media and diplomatic resources to promote a false story at a time of rising German anti-migrant sentiment, Moscow sought to exploit domestic German political divides to encourage mass demonstrations and damage the German government politically.

Digital disinformation often promotes xenophobic sentiment, and hate speech is common. In India, far-right religious figures used messaging applications to spread false claims about religious minorities, sparking communal violence. In Indonesia, political and religious leaders have decried the spread of hate speech and rumors over social media, which played a pivotal role in the Jakarta gubernatorial election.

Mass media have been used to spread disinformation and hate speech in the past, and have played a key role in modern genocides. Social media is now playing a similar role in contemporary atrocities: in Burma, for instance, ultranationalist Buddhist monks have used social media to mobilize supporters and instigate violence against the Rohingya, a persecuted Muslim minority group.

 

Disinformation During Elections

Often, disinformation aims to influence citizens’ decisions to vote (or to abstain from voting). The use of disinformation around elections is probably only slightly younger than representative democracy itself, but the reach, speed, and low cost of disseminating disinformation over social media have amplified the problem.

The actors involved are often subnational political figures or organizations, although state organs are sometimes complicit. In South Korea, for example, the role of state-spread disinformation during the country’s 2012 presidential election was exposed after an investigation found that the National Intelligence Service generated more than 1.2 million Twitter messages supporting now-impeached South Korean President Park Geun-hye (or, as is often the case with disinformation, denigrating her rival).

The 2017 Kenyan elections offer a valuable case study in the widespread use of domestically sourced disinformation in an electoral context. As in the Philippine case cited above, rival political factions created sophisticated digital operations, enlisting influential social media personalities, paid commentators, and armies of bot accounts. Digital advertising techniques amplified the spread of hate speech and disinformation targeting political opponents. Hoax websites imitating real news outlets produced disinformation at an industrial scale: one study found that nine in ten Kenyans had seen false information about the election online, and 87 percent of respondents believed that information to be deliberately false. These techniques—not unique to Kenya—proved dangerous at an exceptionally contentious political moment in a country where previous elections had led to bloodshed.

 

Foreign-sourced Disinformation in Electoral Contexts

While disinformation frequently originates from domestic sources, some authoritarian governments increasingly use disinformation to influence elections beyond their borders. The Russian Federation stands out as the paramount example. Even a partial list of elections where Russian-produced or -supported disinformation has featured includes the French, German, and American elections in 2016 and 2017; the 2018 Czech presidential election; and the 2017 vote on Catalonian secession from Spain. In each of these cases, Moscow used a combination of state-owned international news outlets, smaller news sites linked to Moscow, and automated social media accounts, sometimes in tandem with leaks of stolen documents and communications.

It can be tremendously difficult to estimate the total effect of these simultaneous approaches, especially since international disinformation operations often imitate—or even promote—material produced by domestic actors. Sometimes, disinformation may flow the other way as it migrates from foreign sources to mainstream domestic news outlets.

Moscow is not the only actor in this space. While Beijing’s international media strategy differs substantially from Moscow’s, there is evidence it has experimented with disinformation in Taiwanese politics as part of a long-standing policy regarding unification between Taiwan and the People’s Republic of China.


Disinformation as a Strategic Approach

Not every disinformation campaign is linked to a specific event such as an election. Disinformation can also be used to alter the broader information space in which people discuss issues, form beliefs, and make political decisions; it is sometimes deployed to promote a larger narrative over time or to degrade civic discourse by promoting division or cynicism.

Political actors have used disinformation for their benefit for millennia. However, the velocity and volume of disinformation in the contemporary information space seem to have amplified its effectiveness and left many members of the public increasingly angry, fearful, or disoriented. This, in turn, leaves publics even more vulnerable to future manipulation, resulting in a cycle of declining public trust in objective sources of information which some analysts call “truth decay.”

Russian disinformation provides an instructive case study: at home and abroad, it draws on the principle that there is no such thing as objective truth. This allows Moscow to deploy multiple narratives and conspiracy theories when seeking to undermine public confidence in Western institutions, including claims that European politicians support Nazism in Ukraine, that the German government will pay for refugees and their “harems” to migrate to Europe, and that NATO planes spray mind-control chemicals over Poland. In addition to their explicit messages about Western wrongdoing, each of these stories implicitly suggests that Western media are concealing the truth from the public.

Consumers do not necessarily need to be persuaded by these stories—the introduction of doubt or anxiety may be enough to inspire distrust or political disengagement. In the case of the story about German taxpayers funding migrant harems, Moscow drew upon anti-migrant sentiment and resistance to German refugee policy to deepen political divides—not for the sake of inspiring immediate action, but because a divided and more fragile European Union serves Moscow’s geopolitical interests.

As with many of the applications of disinformation described above, it would be a mistake to believe that this approach is adopted only, or even primarily, by state actors; subnational political actors, business interests, and other parties also draw on these practices. An example comes from South Africa, where wealthy industrialists with close ties to South African politicians hired a British PR firm to distract from growing political corruption by inflaming race relations. By combining media outlets owned by the industrialists with a “wildly successful” social media campaign, the firm temporarily diverted attention from an ongoing process of state capture by manipulating social divides over racial inequality.

The disinformation challenge is about more than authoritarian propaganda or PR techniques. Longstanding vulnerabilities in human cognition, combined with the impact of new and emerging technologies on the information environment, allow bad actors around the world to pursue political gains at the expense of democratic political discourse. The search for solutions must start by recognizing that the challenge is global and structural.

 

Brief prepared by Dean Jackson, International Forum for Democratic Studies. The author thanks Christina Apelseth for her research assistance.

 



FOR MORE FROM NED’S INTERNATIONAL FORUM ON THIS TOPIC, SEE:

Dean Jackson’s February 2018 Q&A with Dipayan Ghosh on the “Commercial Drivers of Precision Propaganda.”

Dean Jackson’s February 2018 Q&A with Maria Ressa on “Digital Disinformation and Philippine Democracy in the Balance.”

“Distinguishing Disinformation from Propaganda, ‘Fake News,’ and Misinformation,” an International Forum issue brief by Dean Jackson.

“Can Democracy Survive the Internet?” an April 2017 Journal of Democracy article by Nathaniel Persily.

“The Disinformation Crisis and the Erosion of Truth,” by Dean Jackson for the Power 3.0 blog.

“The Authoritarian Capture of Social Media,” by Peter Kreko for the Power 3.0 blog.

Dean Jackson’s November 2017 Q&A with Jonathon Morgan on “Tracking Digital Disinformation.”

Dean Jackson’s July 2017 Q&A with Phil Howard on “Computational Propaganda’s Challenge to Democracy.”

 


 
