Issue Brief: The ‘Demand Side’ of the Disinformation Crisis

For additional resources on disinformation, visit our blog, Power 3.0: Understanding Modern Authoritarian Influence.

In this Brief:

  • The cognitive factors that make audiences vulnerable to disinformation
  • Technological factors driving the consumption and spread of disinformation
  • Implications of disinformation’s “demand side” for the democratic response

Supply and Demand for Disinformation

Until recently, most efforts to understand disinformation’s challenge to political and media freedom have focused on the supply of content intended to mislead people and undermine political opposition. Consequently, most efforts to respond to the challenge have focused on exposing sources of disinformation, debunking its narratives, and detailing its distribution.

These efforts are key components of a resilient information space. However, they often fail to appreciate the market logic of disinformation, in which supply meets audience demand for certain types of content. This ‘demand side’ of the disinformation challenge complicates the prevailing model of responses based largely on verification and debunking.

Supply-side analyses of disinformation typically begin with understanding the dissemination of false, misleading, divisive, or inflammatory content. This content reaches audiences incidentally as it moves through the media ecosystem. It is sometimes amplified by manipulation of mainstream media sources or by journalistic mistakes. Misled or confused, the audience then doubts accounts from authoritative sources. During moments of crisis or during rapidly unfolding events, users may accidentally share disinformation out of a desire to help or, in a moment of fear or anger, because they do not realize they have been misled.

These assumptions concerning the spread of disinformation may hold true for many people much of the time. However, some subset of the population consistently consumes and shares disinformation. These individuals may be invested in the narratives supported by disinformation campaigns. Their worldview or sense of self may also lead them to believe certain sources or stories over others. In some instances, their reliance on the new, hyper-digitized, freewheeling information environment may lead them down paths of paranoia and radicalization that were more difficult to discover and access before the internet’s advent. Such factors make up the ‘demand side’ of the disinformation challenge.

 

Disinformation’s Cognitive Drivers

Why do people consume and share misleading, incendiary content? The answer depends heavily on the individual. Different people have different understandings of the same content, and varying reasons for consuming and sharing it.

Still, observed patterns in human cognition point to reasons an individual might be susceptible to disinformation and seek it out. Research into confirmation bias—a form of motivated reasoning—demonstrates that individuals often betray an unconscious preference for information consistent with preexisting beliefs. This preference is a feature, not a bug, of human cognition: from an evolutionary standpoint it is often a more efficient way to forge social consensus and determine a course of action.

By manipulating preexisting points of division and distrust within their target audience, disinformation campaigns can take advantage of confirmation bias to undermine social trust, discredit authoritative sources of information, peddle falsehoods, and render truth more difficult to discern.

News consumers who interact with disinformation as part of a motivated reasoning process are sometimes described as living in a “post-fact” world. Others warn of “truth decay,” marked by declining social agreement on the value of factual information over opinion. As a sign of growing public concern, the Oxford Dictionaries in 2016 named “post-truth” its international word of the year, defining it as “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.” (The Gesellschaft für deutsche Sprache, or German Language Society, reached a similar decision.) Recent analysis details how a growing number of political movements around the world have exploited diminishing respect for facts and expertise for political gain.

This poses challenges for responses to disinformation based on fact-checking or debunking, and may diminish the effectiveness of media literacy and education as blanket solutions. Highly educated consumers may become more powerful motivated reasoners, better able to justify their preexisting convictions and adhere to an ideological or partisan line.
Understanding Media Consumer Motivation

To understand the role of disinformation in today’s media ecosystem, analysts must take into account individuals’ motivations for newsgathering. Why do audiences choose to consume the content they do? The most obvious reason is to obtain accurate information about the world around them, which they use to make decisions and inform their opinions. Yet accuracy is not the only, or even the most common, motivation.

Users can and often do seek out and share misleading content for emotional or ideological validation. According to one study, “defense-motivated” consumption, guided by the desire to validate one’s worldview, may in fact be the default for media consumption. Participants in the study were no more likely to consume balanced information when rewarded for accuracy than when rewarded for justifying their preexisting opinions. This suggests an inability to distinguish between attitude-consistent content and accurate information.

Other consumers may be motivated by the desire to make an impression on others. This applies especially to the type of content social media users share. A surprising number of users have admitted to sharing information they know or believe to be untrue. While these users reported a variety of reasons (including humor), the desire to signal one’s political stances to others is one potential explanation.

This body of research suggests that media consumer identity, the need for validation, and emotional arousal are key components of the disinformation crisis, and are easily manipulated. They help explain disinformation’s rapid spread through social networks: rumors, hoaxes, and incendiary content generally spread more quickly than journalists and fact-checkers are able to react. Disinformation’s reach relies on media consumers’ predilection to share content that generates fear, anger, or disgust. In the words of one report, “angry messages are more persuasive to angry audiences.” In some cases, disinformation is amplified by automated ‘bot’ accounts, although one recent study contends that the spread of misleading content is more often attributable to humans, who do not need robotic assistance to share information that is not factual or authenticated.

Media consumer identity, the need for validation, and emotional arousal are key components of the disinformation crisis, and are easily manipulated.

The Role of the Internet

Many explanations for disinformation’s increasing threat to democracy and public discourse refer to the transformational power of the internet and social media. There are multiple explanations for the role of social media and major internet platforms as drivers of vulnerability to disinformation, and these explanations are not mutually exclusive.

Many observers point to internet platforms’ control over news distribution as a key factor in this process. As digital advertising eclipsed other sources of newspaper revenue, Facebook and Google—which earn revenue by algorithmically targeting ads to users—became major sources of readership for news outlets, while simultaneously capturing the majority of all online advertising revenue. The result has been a precipitous decline in revenue for most news organizations, leaving the media sector vulnerable to capture and the quality of public discourse diminished.

Another factor is the internet platforms’ assumption—consciously or not—of the news media’s traditional gatekeeping role of influencing which stories come to public attention. While the emergence of social media has allowed new, diverse voices a more prominent role in the public square, it has also handed a tremendous amount of influence to the proprietary, non-transparent algorithms of the major internet platforms. Because these algorithms are designed to generate ad revenue, they privilege content based on factors other than value to the public sphere. The larger these algorithms’ role in selectively delivering news, the more privileged “click-worthy” content becomes—in many cases regardless of veracity or trustworthiness. This creates a perfect storm for disinformation, which is highly engaging by design.

A third argument positing a connection between social media and disinformation focuses on “selective exposure”—the ability of users to sort themselves into homogenous ‘echo chambers’ or ‘filter bubbles’ devoid of content contradicting their preexisting views. Social media platforms do not merely enable users to pursue selective exposure; their revenue model is built on audience hyper-segmentation, allowing for targeted advertising and algorithmically driven selective exposure by default. The end result is often said to be increased consumption of hyper-partisan or ideologically biased media, facilitating polarization, diminishing political dialogue, and increasing vulnerability to politically motivated disinformation.

 

Emerging Evidence on Echo Chambers

Yet evidence from recent research suggests a complicated relationship between social media, selective exposure, and disinformation. While past studies have blamed internet-driven selective exposure for increasing political polarization, more recent studies cast doubt on the notion that social media is a primary driver of polarization. Attempts to explore the relationship between internet use and polarization must also account for the role of users’ offline media consumption, the signals sent by political leaders, and other offline societal variables.

Complicating matters further, the research on echo chambers is also contradictory. While some studies have identified social media echo chambers around specific issues and events, others suggest that most users’ online media diets are diverse and largely centrist. The overall literature also suggests that partisan news consumers are driven more by affinity for likeminded content than by avoidance of opposing views.

Search engines and their algorithms also play a role on the ‘demand’ side. While some users assume these elements of the modern information space are neutral conveyors of unbiased information, they are not. For instance, one study of ideological partisans found its subjects unaware of the manner in which Google search results are returned: they assumed that the topmost results represented a cross-section of the most relevant, credible news sources and failed to consider the impact of Google’s search algorithm on their results. The same study found that using different—sometimes only slightly different—phrasing in searches about controversial issues led Google to surface results from radically different viewpoints. This confirms what other research has found: that the need for greater public understanding of the role of algorithms in content curation is acute, as is the need for greater algorithmic transparency on the part of internet platforms.

A more nuanced model of the relationship between the internet and polarization is emerging. It suggests that while most individuals consume little news media and encounter content from a range of viewpoints online, a subset of heavy media consumers frequently engages with hyper-partisan content and is disproportionately active in online discussions. Like most users, they seem primarily motivated by attraction to likeminded content, not avoidance of contrasting viewpoints. These voracious, outspoken, motivated media consumers may enjoy significant influence over the views of their less politically engaged peers, helping spread ideologically charged narratives and drive polarization. In this way, the internet—which has diminished the power of legacy media gatekeepers and further empowered peer networks to mediate the spread of information—may still play a role in priming the information space for hoax, rumor, and disinformation.

The overall literature suggests that partisan news consumers are driven more by affinity for likeminded content than by avoidance of opposing views.

No Simple Solutions

If media consumer demand is helping propel the disinformation crisis, more nuanced countermeasures than those currently being implemented will be needed. Effective future solutions may be less comprehensive and more iterative, adaptive, and targeted at the motivations of specific groups of media users. A better understanding of disinformation’s demand side should help refine responses to counter it.

 

Brief prepared by Dean Jackson, International Forum for Democratic Studies.

 

FOR MORE FROM NED’S INTERNATIONAL FORUM ON THIS TOPIC, SEE:

“Distinguishing Disinformation from Propaganda, ‘Fake News,’ and Misinformation” and “How Disinformation Impacts Politics and Publics,” two previous International Forum issue briefs in this series.

“The Lie Is Not the Story: Practicing Journalism in the Disinformation Age” and “The Disinformation Crisis and the Erosion of Truth,” two Power 3.0 blog posts by Dean Jackson.

Dean Jackson’s February 2018 Q&A with Maria Ressa on “Digital Disinformation and Philippine Democracy in the Balance.”

“Fighting Fiction: Countering Disinformation Through Election Monitoring,” a Power 3.0 blog post by Julia Brothers.

“Can Democracy Survive the Internet?” an April 2017 Journal of Democracy article by Nathaniel Persily.

“The Authoritarian Capture of Social Media,” by Peter Kreko for the Power 3.0 blog.