Forum Q&A: Lisa-Maria Neudert on the Road Ahead for Civil Society Responses to Disinformation


As the disinformation challenge has evolved, so has the landscape for civil society organizations (CSOs) combatting its impact on democracy. In a working paper entitled “The Road Ahead: Mapping Civil Society Responses to Disinformation,” Lisa-Maria Neudert and Samantha Bradshaw analyzed the challenges CSOs face in this complex and fast-changing field—and how funders and democracy support organizations can better support their future growth.

Dean Jackson of the International Forum spoke with Lisa-Maria, a doctoral candidate at the Oxford Internet Institute and a core researcher at the Computational Propaganda Project, about this working paper and its recommendations.


DEAN JACKSON: In the paper, you categorized dozens of initiatives responding to the disinformation challenge by type of activity and geographic focus. What did you find most noteworthy about the results?

LISA-MARIA NEUDERT: Throughout the process of producing this report, my co-author and I were consistently impressed with the scale of the civil society landscape. In total we mapped out a network of 175 CSOs working on issues of mis- and disinformation. This landscape, and social media itself, are still fairly nascent, yet we found dozens of sophisticated organizations focusing on particular aspects of the problem in different regions and political contexts.

We were also impressed with how diverse the landscape is. Some civil society organizations are working on fact-checking and credibility, while others are working on media literacy education or journalistic support. Of particular interest were a host of organizations developing and using technology themselves, including artificial intelligence. This approach was more common in Western Europe and the United States, where funding is more abundant.

 

Funding was related to several challenges that CSOs raised in your survey and interviews. How would you summarize the main obstacles facing CSOs in this field today?

There were three major concerns. The first was funding, and especially the lack of flexible funding and hesitancy among funders to support new or experimental initiatives that differ from tried-and-true approaches.

The second concern was insufficient access to quality data. Every single person we surveyed flagged this as a major challenge to their work. Sometimes the data is completely inaccessible. Sometimes access is volatile, so the type of accessible data changes. Inequity is another problem: CSOs in Western Europe and the United States often get privileged access to data compared with those in regions or markets where the big tech companies have less presence.

On this point, my colleague at the Oxford Internet Institute, Mona Elswah, produced an interesting paper on the Arab world and access to CrowdTangle, a Facebook-owned tool for measuring the popularity of posts on the platform. She found that many organizations studying the Arab world, and more specifically the Tunisian elections in 2019, could not get access to CrowdTangle because they lacked either the necessary contacts or credibility with Facebook.

The third concern was duplication or redundancy of efforts. Several CSOs reported studying a certain election or social media network only to discover that another organization was being funded to do the same work. Obviously, there’s always nuance in how analysis is conducted, and different researchers may use slightly different methods, but CSOs themselves worry that there may be too many redundant efforts. Sometimes it’s helpful to have several people working on the same thing, especially if it helps establish some kind of review system to improve the work. But in general, if funders encouraged more formal or informal coordination among themselves and among CSOs, it might improve the efficient allocation of resources.


Can you go into more detail about how funders can improve the sustainability and efficiency of CSO efforts?

Despite the explosion of disinformation and social media algorithm research, there are still so many questions that have not been studied at all. There are entirely new platforms that are understudied. Many contexts and issues are not well-represented by interest groups and, as a result, don’t receive due attention. Increased funding targeted at underserved areas is essential.

In terms of improving sustainability, funders should consider more flexible and risk-tolerant approaches. This means empowering organizations to make decisions about what to study and when.

For example, one surveyed expert said they received funding to study a specific network’s impact on an election. Through their research, they discovered that a different network was more responsible for producing disinformation; but because of their commitment to the funder, and because the funder was not flexible, they had to focus on the first network when it would have been more impactful to focus on the second.

Flexibility, the way we describe it in the paper, means giving more decision-making power to CSOs. It also requires funders to have a higher tolerance for experimentation and, ultimately, risk. If funders empower organizations to set flexible agendas, the result may be a study that is not completely in line with their initial expectations. This is a risk, but one that can be managed through repeated conversations between funders and grantees about a project’s scope and direction.

In a way, COVID-19 has been a bit of a wake-up moment because it revealed so many urgent funding needs. Quite a few organizations have either changed their funding allocation or made new funding available in response to issues raised by the pandemic, like vaccine hesitancy, which required a quick response by CSOs. This helped funders and other organizations recognize that the disinformation landscape is quickly evolving, issue-based, and increasingly multi-faceted, with new topics, new actors, and new platforms constantly emerging.


The emergence and growth of new platforms is a really important point. In the paper, you note that researchers and their funders overwhelmingly concentrate on Facebook and Twitter, which are very large but not always the most important platforms.

Absolutely. For example, TikTok is an important, understudied new platform. People are still learning how to study it, and data is not readily available.

There are also encrypted messaging applications like WhatsApp and Telegram, which are difficult to study but still very important, because the one-to-one messaging space is where many conspiracy theories originate before being carried into bigger spaces.

Funders should encourage more research into underexplored spaces and questions like these. Even on Facebook, there are many questions that are understudied, such as the effect of Facebook groups that are less visible to the public. Funders should work with experts to identify opportunities to shed light on underexplored issues.

 

What are your thoughts on the relationships between CSOs and the platforms, which are sources of both funding and data for social media research?

Organizations in Western Europe and North America often describe positive experiences with platforms, which have well-staffed, dedicated teams working on policy and with CSOs in those regions. Governments in those places also often have pending regulation, so platforms are typically quite invested in understanding and addressing issues there. I’m not saying they’re not doing this in other geographies as well, but over the years we have come to understand that there are regions that are under-represented. Myanmar is one example of a hot spot for social media manipulation that suffered for years from a lack of platform staff and content moderation.

Though there are some important issues around data access and availability, the Facebook Advertising Archive is an example of how different degrees of platform engagement play out in practice. In the run-up to its creation, CSOs actively advocated for the type of data that the Archive made available. This kind of influence requires personal relationships with platforms, and those relationships are hard for CSOs to build without a dedicated point of contact. As a result, CSOs in countries without a robust platform staff presence have a harder time accessing data, even if it is supposed to be available to them.

Even when data is accessible, access is often volatile and inconsistent. The availability of data can change from month to month and even from week to week, meaning a study that is supposed to run for three months may lose access to the necessary data before its completion. The format of data is another challenge: the biggest example we use in the paper is when platforms shared data about election interference with the U.S. Senate Select Committee on Intelligence in PDF format, which meant it was not machine readable and was very difficult to use for any kind of statistical analysis.

Finally, it’s often unclear on what basis CSOs can apply for data access through various initiatives. Important research about disinformation comes from key CSOs, but certain platforms and initiatives privilege university research over CSO research. There are some good reasons for this: research teams based at universities must pass approval processes and meet ethical standards. But at the same time, both sides lose out on so much innovation, because the researchers at the leading edge of this issue are often not at universities.


So, the platforms can often choose what types of data are available, for how long, and to whom—this seems like a lot of power that they hold over the research agenda. Are you concerned about that?

The dynamics of these arrangements already create a lot of dependence, including dependence on funding from big social media networks. I’m not saying they should stop funding; it’s good to have things like the Google News Initiative and to see Facebook providing money to CSOs. But when firms control the data, the funding, and who gets access to both, that creates dependencies and the potential to shape research. It is quite concerning in many ways.

 

If researchers had independence in the form of unfettered data access and flexible, sustainable funding, what types of research questions and what types of interventions would be possible?

There are still so many unanswered questions around mis- and disinformation. We’ve only seen certain types of data and there are still areas of platforms that are completely understudied. We don’t yet know enough to say what the real challenges are. For example, what’s the most prevalent type of mis- or disinformation content? How much of that content gets flagged, moderated, and taken down directly?

At the same time, we also understand some parts of the problem well enough that meaningful, evidence-based policy intervention is possible. When it comes to research, one of the big things that we could do is analysis over time. How are things evolving? What is the impact of policy interventions, by which I mean both policies coming from governments and policies coming from social media companies themselves? For example, around the 2020 U.S. election, Facebook chose to make authoritative media more visible in users’ newsfeeds. What was the impact of that? Which sources were people seeing more of, and which were they seeing less of? Did certain pages receive less engagement?

Social media platforms have the data to answer those questions. They can run analyses internally and then choose whether to share the results with the outside world. Every now and then an employee leaves one of the platforms and gives the rest of us a window in. It would be better if independent researchers were doing that work and making it publicly accessible. At the end of the day, it comes back to basic questions of data access.

 

Lisa-Maria Neudert is a doctoral candidate at the Oxford Internet Institute and a core researcher at the Computational Propaganda Project, where her work sits at the nexus of political communication, technology studies, and governance. Her current research examines how governments and social media platforms govern policy issues surrounding disinformation. Follow her on Twitter @lmneudert.

This interview has been condensed and edited for clarity. The views and opinions expressed here do not necessarily reflect those of the National Endowment for Democracy.

 



FOR MORE FROM NED’S INTERNATIONAL FORUM ON THIS TOPIC, SEE:

A working paper by Samantha Bradshaw and Lisa-Maria Neudert on “The Road Ahead: Mapping Civil Society Responses to Disinformation.”

“How to Help Civil Society’s Disinformation Researchers Flourish,” a Power 3.0 blog post by Rachelle Faust and Daniel Cebul.

“Dancing in the Dark: Disinformation Researchers Need More Robust Data and Partnerships,” a Global Insights essay by Renée DiResta.

A Forum Q&A with Kelly Born on “The Evolving Field of Disinformation Research.”

 
