Big Tech must step up now to fight midterm misinformation


We’re just four months away from the 2022 midterm elections, and more than 100 candidates across the country have embraced Trump’s “big lie.” These candidates not only claim the 2020 race was rigged, but also cast doubt on the legitimacy of the upcoming November election.

In 2020, allegations of voter fraud spread widely on social media. President Trump regularly tweeted election lies, and groups used Facebook to coordinate the January 6 insurrection. So far, however, reports indicate that social media companies may be unprepared for the coming onslaught of election misinformation.

As Facebook pivots to focus on the metaverse, for example, the company has reduced the number of staff focused on election integrity from 300 to 60. Experts worry that this lack of resources and attention, combined with the scale of the midterm elections, could exacerbate the problem. Indeed, internal research shows that Facebook struggles to detect misinformation in local contexts, like those at play in midterm races.

Instead of scaling back election integrity measures, platforms should be strengthening their safeguards. As scholars who study the intersection of social media, politics, and democracy, we’re watching four questions.

How will social networks respond to threats to democratic legitimacy?

Right now, a faction of the Republican Party has decided that election results – at least when they lose – are not legitimate. As a result, platforms must not only think about how to moderate election misinformation, but also how to deal with candidates who question the legitimacy of the process itself.

Platforms have many ways to moderate misinformation. Research shows these approaches work, and fall short, to varying degrees. For example, several studies indicate that fact checks can reduce belief in misperceptions, though the effects may fade over time. Another study found that attaching warning labels to, or blocking engagement with, Trump’s 2020 election disinformation tweets was not linked to a reduction in their spread, either on Twitter or on other platforms. And while recent work shows that accuracy nudges can decrease belief in and sharing of misinformation, these interventions have yet to be tested at scale across platforms.

Beyond content, platforms also have to contend with the users spreading election lies, many of whom are political candidates. With the exception of Trump, companies have been largely reluctant to ban candidates who post misinformation, in part because high-profile users, such as celebrities and politicians, have been largely exempt from Facebook’s content moderation rules.

There is no silver bullet to stop misinformation on social media. Instead, platforms must work together to use a variety of tools to slow its spread, fairly and equitably punish users who repeatedly violate the rules, and maintain trust by supporting open democratic discourse. The European Union’s new anti-disinformation code, which several platforms voluntarily signed up to in June, is an encouraging start.

How will companies prevent extremists from organizing on their platforms?

Social media does not have a monopoly on the dissemination of anti-democratic content. In fact, Harvard’s Berkman Klein Center found that 2020 election misinformation surrounding mail-in voting was an “elite-driven, mass media-led process.” However, social platforms remain a key venue where groups – both pro-social and anti-democratic – can coordinate and mobilize. Classifying and moderating violating content is hard; restricting groups’ ability to mobilize is even harder, since content from small, closed groups can cause outsized harm.

So far, there have been some notable failures. Prior to January 6, Facebook banned the original “Stop the Steal” group for hateful language and incitement to violence. However, it did not stop copycat groups, which saw “skyrocketing growth.” Overall, a 2021 analysis found 267 pages and groups, many linked to QAnon and militia organizations, “with a combined following of 32 million,” pushing content glorifying violence in the heat of the 2020 election.

These groups on Facebook — and other platforms — were instrumental in coordinating Jan. 6. With so many candidates still talking about rigged elections, we may see more violence after the upcoming midterm elections. Social platforms should do everything in their power to disrupt these groups and prevent extremists from organizing violence.

What about video?

For years, social media platforms were largely text- and image-based. Now video dominates. TikTok, with over a billion monthly active users, is one of the most popular social networks. YouTube, the second most visited website after Google, remains under-researched. And even Facebook, once a place designed to connect with family and friends, is now focusing on short-form video.

Platforms have struggled to build artificial intelligence systems that can moderate text content at scale. How will they deal with multimodal misinformation – shared in the form of images, videos, and audio? Reports suggest misinformation is rampant on TikTok, particularly around COVID-19 vaccines and the Russian invasion of Ukraine. YouTube has done a better job of tweaking its algorithm to exclude potentially harmful videos. But as the midterm races heat up, this is a critical area to watch.

Will the platforms share their data?

While we’ve come a long way in our understanding of these networks, it’s hard to really know what’s going on without access to more social media data. Access currently varies widely by platform.

Facebook’s CrowdTangle tool helps us examine content engagement, but researchers fear it could be shut down at any moment. Twitter has been an industry leader in data access, but Elon Musk’s pending purchase of the platform puts that access in doubt. Meanwhile, TikTok and YouTube share very limited data and are largely closed to journalists and researchers.

There are currently several proposals in Congress that would secure researchers’ access to data, and the EU has just passed landmark rules regulating Big Tech. While it’s too late for these measures to make data accessible for this election cycle, they are promising developments for the future.

Of course, social media is not solely responsible for the current state of our democracy. Larger societal forces, including a fragmented media environment, geographic sorting by partisanship, and partisan gerrymandering, have contributed to polarization in recent decades. But social media can often act as an accelerant, exacerbating our institutional shortcomings.

Heading into the midterms, we hope social media leaders are taking threats to our democracy seriously and have, or will develop, comprehensive plans to help protect the electoral process.

Zeve Sanderson is the executive director of NYU’s Center for Social Media and Politics (CSMaP). Joshua A. Tucker is one of the co-founders and co-directors of NYU’s Center for Social Media and Politics (CSMaP). He is Professor of Politics, Affiliate Professor of Russian and Slavic Studies, and Affiliate Professor of Data Science at New York University, as well as Director of NYU’s Jordan Center for the Advanced Study of Russia. He is co-editor of the volume “Social Media and Democracy: The State of the Field” and co-chair of the independent academic research team for the 2020 Facebook and Instagram US Election Research Study.
