Evaluating Platform Election-Related Speech Policies

Both the blog post and the attached PDF have been updated as of Oct. 28, 2020. Because rapidly evolving policies have necessitated substantial changes to the text of the post, we have archived past versions after each update; they are linked as PDFs at the bottom of this post. By doing this, we hope to preserve a policy archive and show how these policies have evolved over this short period of time.

With six days until the election, this blog post reflects our current analysis of the election-related policies of 15 different social media platforms; we recently added the streaming platform Twitch. Since first publishing our analysis on August 18, 2020, we have updated this post four times to reflect policy changes by Facebook, Twitter, Pinterest, Snapchat, YouTube and, most recently, Nextdoor and TikTok. These changes include new policies addressing content that delegitimizes election results, a type of election-related content we have previously highlighted as important to election integrity. Additionally, several platforms have clarified how their policies will be applied. These are constructive updates, and we hope platforms that lack policies addressing election-related content will find their counterparts’ policies inspiring.

Election-related misinformation and disinformation on social media can have a significant impact on political behavior and on citizens’ trust in election outcomes. There is growing consensus among the social media platforms, e.g., Facebook, Twitter and YouTube, that election-related misinformation and disinformation require special attention — and, in some cases, action — from content, trust and safety, and policy teams. In the past year, several platforms have updated their policies and expanded their definitions of election-related content: Facebook expanded its voter and/or census interference policy; Twitter introduced a Civic Integrity Policy; and TikTok, Nextdoor and YouTube updated policies on election interference, election misinformation and election-related content, respectively. In just the last two months, Facebook and Twitter have updated their policies twice, Snapchat introduced election-related policies for the first time in mid-September, and Pinterest, TikTok and Nextdoor have introduced new policies as recently as October 9.

In this blog post we examine the election-related policies of 15 different platforms and assess the extent to which they address the threats stemming from election-related misinformation and disinformation. We also test these policies against scenarios focusing on two important aspects that have been, and will likely continue to be, especially problematic in the 2020 election: false claims aimed at delegitimizing election results and posts that may lead to physical confrontations at polling places. Finally, we briefly discuss the challenges of applying these policies in practice. The attached PDF, “Platform Policies,” contains a detailed breakdown of how each platform’s policies fit into our framework and how we arrived at our conclusions.

Key Takeaways — As of October 28, 2020

  • Since publishing on August 18, 2020, we have seen significant policy updates from Facebook (Instagram), Twitter, YouTube, Pinterest, TikTok and Nextdoor. Snapchat also updated its policies by adding a sentence that specifically addresses claims meant to “undermine the integrity of civic processes” — introducing an election-related policy for the first time.  

  • Of the 15 platforms we have reviewed, policy updates mostly came from those that already had election-related policies; Snapchat is the recent exception. Platforms without election-related policies — Parler, Gab, Discord, WhatsApp, Telegram, Reddit and Twitch — did not update their community standards to address election-related content. 

  • We have defined three core categories of election-related misinformation and disinformation — Procedural Interference, Participation Interference and Fraud. Using this framework, we classify the election-related policies of seven major platforms and rate them as “None,” “Non-Comprehensive” or “Comprehensive.” When we first published on August 18, we found that only a handful of platforms had comprehensive policies. Following the recent updates, Pinterest, TikTok and Nextdoor now have some type of policy in at least one of the three categories where they previously had none. 

  • A fourth, broader core category of election-related content, which aims to delegitimize election results on the basis of false claims, poses significant problems for all of the platforms we analyzed. In August 2020, none of these platforms had clear, transparent policies on this type of content. As of October 2020, however, Facebook, Twitter, Pinterest, Nextdoor and TikTok have updated their policies to address it.

  • We’ve updated our analysis to include scenarios that specifically address potential confrontations at polling stations, where posts call for or seek to mobilize unauthorized poll watchers. Our three scenarios vary in severity, ranging from a general call to action to one that specifically asks for armed individuals. We found that of the seven platforms with election-related policies, three have policies that address content inciting violent interference in voting operations. 

  • In addition to policies that remove content, platforms such as TikTok and Twitter have introduced more measures that add friction and change curation, aiming to curb the spread of misinformation and limit virality. 

  • Enforcing policies that prohibit “misrepresentations” or “misleading” content of various kinds will require platforms to know the facts on the ground. Policies seeking to address falsifiable delegitimizing content will require platforms to make difficult judgment calls about political speech. There is no panacea for election-related misinformation and disinformation. But well-articulated, transparent policies will make it easier to mitigate threats to the election and will improve user confidence in enforcement decisions. 

Four Categories of Election-Related Misinformation

The first step to mitigating the impact of election-related misinformation and disinformation is understanding the current policy landscape: what election-related policies are in place at the social media companies? What shared concepts are these policies based on? What potential vulnerabilities remain? While it is clear that the process of crafting policies to confront misleading election-related content is ongoing, broad areas of agreement have emerged about what kinds of content need to be addressed. 

Based on our analysis of the major platforms’ policies, we have defined four core categories of election-related content, listed below. Three of these concern specific harms related to voting procedures and voter suppression. 

  • Procedural Interference: Misleading information about actual election procedures, that is, content directly related to the dates and components of the voting process that prevents people from engaging in the electoral process. For example:

    • Content that misleads voters about how to correctly sign a mail-in ballot. 

    • Content that encourages voters to vote on a different day.

  • Participation Interference: Content, often related to voter intimidation, that deters people from voting or engaging in the electoral process. For example:

    • Content that affects the desire or perceived safety of voters engaging in the electoral process.

    • Misleading information about the length of lines at a polling station, to deter in-person voting. 

    • Misleading information about a supposed COVID-19 outbreak in an area, to deter in-person voting.

    • Allegations of a heavy police presence or ICE agents at polling stations.

  • Fraud: Content that encourages people to misrepresent themselves to affect the electoral process or illegally cast or destroy ballots. For example:

    • Offers to buy or sell votes with cash or gifts.

    • Allegations that ballots are being mailed to pets and used to cast votes.

  • Delegitimization of Election Results: Content aiming to delegitimize election results on the basis of false or misleading claims.

As we show in the attached document, the first three of these categories are adequately covered by the larger platforms’ policies, and broad agreement about the lines between permissible and impermissible election-related content makes it easier to classify and address content falling into these categories. The fourth category, Delegitimization of Election Results, was incorporated into some of the platforms’ policies only after we published our blog post in August. 

This should not be taken to mean that there are clear lines here, or that addressing issues stemming from these kinds of claims is as simple as applying a rubric. This area of election-related content, more than others that can be more clearly delineated, will require “judgment calls,” even with specific, granular definitions in place. We discuss this problem below in the context of a comparison of platform policies.

Comparing Platform Policies

How do popular platforms address these broad categories of election-related content? The answer ranges from comprehensively to not at all. We compared the community guidelines of 15 platforms: Facebook, Twitter, YouTube, Pinterest, Nextdoor, Parler, Gab, Discord, WhatsApp, Telegram, Snapchat, TikTok, Reddit, Instagram, and Twitch. Our choice of platforms was guided by two criteria: (1) the platform is among the most popular U.S. social media platforms by user base, or (2) the platform markets itself as a political forum. 

We rated each platform’s policies as either “None,” “Non-Comprehensive” or “Comprehensive,” depending on how specifically it addresses the content type (see the attached PDF for a detailed assessment of each platform’s election-related policies):

  • None: The platform has no explicit policy or stance on the issue.

  • Non-Comprehensive: Policy in this category contains indirect language or broad “umbrella” language, such that it is not clear what type of election misinformation and disinformation the policy covers. This rating also applies to policies that give a single detailed example and therefore cover some, but not all, of a category.

  • Comprehensive: Policy in this category uses direct language and is clear on what type of election misinformation and disinformation the policy covers. It also sufficiently covers the full breadth of the category.
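
To make the rubric concrete, here is a minimal, hypothetical sketch of how such a ratings matrix could be represented in code. The platform names and ratings below are placeholders for illustration only; our actual evaluations are in Figure 1 and in the attached PDF.

    from enum import Enum

    class Category(Enum):
        PROCEDURAL_INTERFERENCE = "Procedural Interference"
        PARTICIPATION_INTERFERENCE = "Participation Interference"
        FRAUD = "Fraud"
        DELEGITIMIZATION = "Delegitimization of Election Results"

    class Rating(Enum):
        NONE = "None"                            # no explicit policy or stance
        NON_COMPREHENSIVE = "Non-Comprehensive"  # indirect or umbrella language
        COMPREHENSIVE = "Comprehensive"          # direct language covering the full category

    # Placeholder entries for illustration only; see the attached PDF for our real evaluations.
    ratings = {
        "ExamplePlatformA": {
            Category.PROCEDURAL_INTERFERENCE: Rating.COMPREHENSIVE,
            Category.PARTICIPATION_INTERFERENCE: Rating.NON_COMPREHENSIVE,
            Category.FRAUD: Rating.COMPREHENSIVE,
            Category.DELEGITIMIZATION: Rating.NONE,
        },
        "ExamplePlatformB": {category: Rating.NONE for category in Category},
    }

    def gaps(platform):
        """Return the categories in which a platform has no policy at all."""
        return [c for c, r in ratings[platform].items() if r is Rating.NONE]

    print(gaps("ExamplePlatformA"))  # [Category.DELEGITIMIZATION]

Laying the evaluations out this way makes the gaps easy to see: any cell rated “None” marks a category a platform’s community standards do not address at all.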


Seven platforms address three categories — Procedural Interference, Participation Interference and Fraud — in some manner: Facebook, Twitter, YouTube, Pinterest, Nextdoor, TikTok, and most recently Snapchat as of mid-September. The scores for these platforms can be found in the chart below. Seven platforms — Parler, Gab, Discord, WhatsApp, Telegram, Reddit, and Twitch — do not have election-related policies at all. Instagram’s policies are ambiguous, since it is not clear whether Facebook (which owns Instagram) applies the same policies uniformly across both platforms. The chart below has been updated several times to reflect recent policy announcements by the platforms. Previous ratings and justifications for revised evaluations can be found in the attached PDF, along with the corresponding policy language.

Figure 1: UPDATE: This chart was edited on October 26, 2020, to reflect policy changes from YouTube. The evaluation of the policies by platform was informed by their community guidelines and standards linked here: Facebook, Twitter, YouTube, Pinterest, Nextdoor, TikTok and Snapchat. For more detail about each platform’s policies and justification for each evaluation, see the attached PDF at the end of the blog post.

Though the comprehensiveness of these policies is not a guarantee of their effectiveness or of their consistent enforcement, we have nonetheless chosen to focus on this aspect in our assessment, as a way to inform our outreach to platforms and escalation channels in our monitoring work. This breakdown has been an effective tool for communicating with platforms about non-comprehensive policies, and with the general public about significant policy changes we have seen. The next section dives deeper into the fourth category of election-related content: Delegitimization of Election Results. 

Scenario Testing: Delegitimization of Election Results

The fourth category is an especially important one, as it poses more difficult content-moderation issues. There is a spectrum of possible claims intended to delegitimize the election — some authentic, some disingenuous — and deciding which kinds of claims have the potential to cause specific harm is difficult. While broad claims about the legitimacy of the election, such as “it’s all rigged” or “the system is broken,” are part of normal political discourse, such claims become highly problematic when they are made on the basis of misrepresentation — such as a misleading video or photo. The former is something Americans might see and hear every day; the latter has the potential to cause political instability if it is combined with coordinated inauthentic behavior or goes viral. In the chart below we lay out a matrix of potential scenarios to understand these claims and their potential to cause harm.

Figure 2: These scenarios represent different statements with varying levels of “evidence” to support each claim. Scenario 4 has been added to this analysis as of September 11, 2020.

These scenarios should not be taken to mean that there are clear lines differentiating different types of content, or that addressing issues stemming from these kinds of claims is as simple as applying a rubric. This area of election-related content, more than others that can be more clearly delineated, will require “judgment calls,” even with specific, granular definitions in place. Additionally, our scenarios may not capture all dimensions or cases of delegitimization on the platforms. For example, YouTube’s policy includes two examples that could fall under delegitimization: 1) “False claims that non-citizen voting has determined the outcome of past elections” and 2) “Telling viewers to hack government websites to delay the release of election results.” However, like other platforms, YouTube’s policy does not specifically address delegitimization of election results as a category in its own right.

Below is our analysis of how platform policies address these scenarios related to the delegitimization of election results. We do not believe that all of these posts, especially ones similar to the non-falsifiable Scenario 1, should necessarily be taken down, limited or otherwise acted upon. We do believe, however, that it is important for the platforms to provide predictability as to what their policies intend to cover. 

We have also updated this chart as of October 14 to include a fifth column indicating whether platforms have specified their authoritative sources for calling election results. As policy experts such as Evelyn Douek have advocated, we believe it is important for platforms to be transparent about which sources they will rely on to make decisions in cases such as Scenario 4, where candidates may prematurely claim victory.

Figure 3: UPDATE: This chart has been updated as of October 28, 2020, to reflect a recent announcement from TikTok. An additional fifth column was added to address whether the platforms name their authoritative source for determining when the election results are called. The evaluation of the policies by platform was informed by their community guidelines and standards linked here: Facebook, Twitter, YouTube, Pinterest, Nextdoor, TikTok, and Snapchat. For more detail about each platform’s policies and justification for each evaluation, see the attached PDF at the end of the blog post.

Already, our EIP monitoring team has seen posts from blue-checkmark users and highly visible politicians making claims about whether the election will be “rigged” or “fraudulent.” Of the cases our team has worked on, delegitimization is the most frequent category of election-related misinformation and disinformation we come across — in the first four weeks of monitoring, about 40% of our cases have been related to delegitimization.

Scenario Testing: Policies Surrounding Physical Confrontations Outside of Polling Places

This section of the post has been added as of October 14, 2020, to address heightened concerns related to calls for unauthorized poll watchers and other types of potential voter intimidation at the polls. These concerns were amplified multiple times by President Trump during the first presidential debate on September 29, such as his call to “go into the polls and watch very carefully.” Mobilization efforts have been similarly initiated by organizations such as a private company in Minnesota that is trying to recruit former U.S. military Special Operations personnel to guard the state’s polling sites, an effort not welcomed by Minnesota’s attorney general. 

Authorized poll watchers are a legitimate part of an election, but calls to mobilize unauthorized poll watchers could undermine election integrity. According to the National Conference of State Legislatures (NCSL), poll watchers, also referred to as “partisan citizen observers,” are meant to “ensure their party has a fair chance of winning an election.” States and counties where poll watching is allowed have specific regulations for appointing poll watchers, as well as for what they can and cannot do. For example, in South Carolina, poll watchers cannot directly challenge voters and must go through a manager.

Calls for unauthorized poll watchers may lead to voter intimidation. Organizations such as the Healthy Elections Project have prepared helpful memos, including a survey of state-level policies for addressing Election Day violence in 11 swing states. Similarly, Georgetown Law’s Institute for Constitutional Advocacy and Protection (ICAP) has two important fact sheets: the first lays out the laws barring unauthorized private militia groups in all 50 states; the second lays out federal laws protecting against voter intimidation. For example, according to the second fact sheet, “even where guns are not explicitly prohibited, they may not be used to intimidate voters.” 

The resources from Healthy Elections and Georgetown Law’s ICAP demonstrate that there are already laws in place to deal with physical confrontations. However, analysis from our EIP researchers shows general calls for poll watchers already propagating on social media. What’s more, the analysis shows that amplification of these posts, such as those around the “Army for Trump” rally, can have negative secondary effects: the amplification itself can contribute to voter suppression. Therefore, platforms must develop policies to address their role in potential mobilization efforts to intimidate voters at voting locations. 

The scenarios below evaluate whether platform policies address these specific concerns. The scenarios include posts that call for unauthorized poll watchers generally, unauthorized poll watchers for a specific location or armed unauthorized poll watchers. These posts can result in voter intimidation either by actually mobilizing unauthorized poll watchers or by spreading the perception of hostile unauthorized poll watchers at polling places. Overall, the most extreme instances of these scenarios often already fall under election integrity guidelines for some platforms. However, there are still large gaps these companies do not currently address.

Figure 4: The three scenarios represent different levels of voter intimidation and violence that may ensue at a polling location. These scenarios are used to test the platform policies, the results of which are shown in the next table, Figure 5.

Our rationale for determining whether a policy meets the conditions to be labeled “Comprehensive” is as follows: For Scenarios 1 and 2, the policy must specify that the platform will address posts that encourage or call on unauthorized individuals or groups, including unauthorized poll watchers, to interfere in the procedures or operations of a polling place. For Scenario 3, the policy must specify that the platform will address posts that incite violence related to elections and voting. 

Figure 5: UPDATE: This chart has been updated as of October 27, 2020. The evaluation of policies by platform was informed by different platforms’ newsroom posts and community guidelines linked here: Facebook, Twitter, YouTube, Pinterest, Nextdoor, TikTok, and Snapchat.

There are three main takeaways from the chart above: First, the recent policy updates from Facebook and Twitter, announced October 7 and 9 respectively, demonstrate that platforms are adapting their policies to address this specific type of voter intimidation. Second, we found that many platforms did not draw a distinction between specific calls to action and general calls to action when it came to posts that had the potential to disrupt operations at polling places. And lastly, three of the seven platforms specifically address content that incites violent interference in voting operations, as seen in Scenario 3.

It will be difficult for platforms to make judgment calls on more general statements. When deciding how to evaluate these policies, our team had multiple discussions about what distinguishes calls to action that specifically aim to mobilize extralegal poll watching from general concerns or encouragement for voters to be vigilant about fraud while at the polls. Factors such as the overall influence of the original poster, the implied misinformation claim or the specific mobilizing word choice may all serve as criteria for identifying a call to action or incitement.

Policy Enforcement

Platforms have a variety of moderation responses at their disposal for addressing election-related content that violates their policies. Since 2016, Facebook, for example, has used a framework called Remove, Reduce, Inform. As its name suggests, the framework removes content that violates the platform’s policies, reduces the reach of problematic content that does not violate its policies but can still be harmful, and informs users with additional information about the content they share and see on the platform. While other platforms use different terms, these three types of intervention are common across the industry. The choice among enforcement options is also informed by the gravity of the infringement, the nature of the account posting the content and that account’s prior infringements.
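
To illustrate how such a framework can be reasoned about, the sketch below models the three intervention types and the factors mentioned above (gravity of the infringement, prior infringements by the account), plus a public-interest flag of the kind discussed in the next paragraph. This is a hypothetical toy model with invented field names and thresholds, not Facebook’s or any other platform’s actual decision logic.

    from dataclasses import dataclass
    from enum import Enum

    class Intervention(Enum):
        REMOVE = "remove"  # take down content that violates policy
        REDUCE = "reduce"  # limit the reach of problematic content
        INFORM = "inform"  # add labels or context for users

    @dataclass
    class Post:
        violates_policy: bool   # content breaks a written rule
        borderline_harm: bool   # harmful but not a clear violation
        gravity: int            # severity of the infringement, 0 (minor) to 3 (severe)
        prior_strikes: int      # previous infringements by the posting account
        newsworthy: bool        # qualifies for a public-interest/newsworthiness exception

    def choose_interventions(post: Post) -> list[Intervention]:
        """Hypothetical decision logic loosely mirroring 'Remove, Reduce, Inform'."""
        if post.violates_policy and not post.newsworthy and (post.gravity >= 2 or post.prior_strikes >= 2):
            return [Intervention.REMOVE]
        actions = []
        if post.violates_policy or post.borderline_harm:
            actions.append(Intervention.REDUCE)  # e.g., downrank or limit resharing
            actions.append(Intervention.INFORM)  # e.g., apply a label or notice
        return actions

The point of the sketch is only that the same piece of content can be routed to very different outcomes depending on context, which is exactly where the exceptions discussed next come into play.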

One critical piece of the discussion of enforcement is the platforms’ established precedent — sometimes a result of deliberate policy, sometimes what appears to be ad hoc decision-making — that some content can be exempt from enforcement if the platform believes “the public interest in seeing it outweighs the risk of harm.” Facebook has codified this exemption in its “newsworthiness” clause, which allows content that would otherwise be in violation of Facebook’s policies to stand “if it is newsworthy and in the public interest.” Likewise, Twitter has created a “public-interest exception” exempting content that would otherwise violate its policies “if it directly contributes to understanding or discussion of a matter of public concern.” And most recently, in its newsroom post on October 7, 2020, detailing how it applies its policies to election-related content, TikTok defined a public interest exception to its policies, describing the public interest as “something in which the public as a whole has a stake, and we believe the welfare of the public warrants recognition and protection.”

In the context of Facebook and Twitter, we have seen more policy action applied to politicians and public figures whose content falls under the “newsworthiness” clause: in some cases a label is applied to these posts, as on Facebook, or they are put behind a notice, as on Twitter. However, we have seen misleading content posted verbatim on both platforms, with enforcement sometimes applied on one platform and not the other. This may reflect different policies or different applications of those policies. Regardless, enforcement of these policies in the next few weeks will be important for setting regular, predictable interpretations of these important exceptions.


Friction and Curation

One of the real challenges in addressing misinformation and disinformation on social platforms is virality — the speed and reach of the content make countering it after the fact almost impossible. However, a solution many researchers have advocated is for platforms to inject some friction for misleading content, to give fact-checkers and labelers time to act. Friction can fall under both the “reduce” and “inform” interventions common across platforms, and can be tailored to the needs of the specific platform. For example, on October 1, Twitter reported seeing “promising results” from a policy introduced in early June that was meant to inject friction: if a user retweets an article, a prompt may appear suggesting the user read the article first.

We are now seeing more platforms introduce these nudges. On October 9, Twitter announced a further series of changes that introduce friction on its platform. Among other things, these include encouraging people to use the Quote Tweet function instead of Retweet to “encourage everyone to not only consider why they are amplifying a Tweet, but also increase the likelihood that people add their own thoughts, reactions and perspectives to the conversation.” On October 7, TikTok also introduced policies to create friction, including redirecting search results and hashtags that violate its Community Guidelines.

Friction is often a small design change in a user’s interaction with a platform. It does not necessarily limit user options; rather, it alters user behavior in ways the platform believes improve the community overall. In relation to misinformation, it can be as simple as a prompt asking users to open an article before sharing it. This design change lets people decide how to participate in political conversations while promoting more thoughtful behavior on the platform, by encouraging them to pause or actually read the content they share.
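
To show how lightweight such a nudge can be, here is a hypothetical sketch of a read-before-sharing prompt like the one described above. The function and field names are ours, not any platform’s actual API, and real systems are certainly more involved.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ShareAttempt:
        has_article_link: bool  # the post being shared links to an article
        article_opened: bool    # the user opened that link before sharing

    def friction_prompt(attempt: ShareAttempt) -> Optional[str]:
        """Return a nudge to show before completing the share, or None for no friction."""
        if attempt.has_article_link and not attempt.article_opened:
            # The share is not blocked; the user can dismiss the prompt and continue.
            return "Want to read the article before sharing it?"
        return None

The design choice worth noticing is that nothing is removed or forbidden: the prompt only adds a pause, which is why this kind of intervention sits more comfortably alongside speech concerns than outright takedowns.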

The second real challenge in addressing misinformation and disinformation is curation, or how platforms determine what content appears on a user’s screen and how it is presented. Rather than presenting content in reverse-chronological order, most platforms use algorithms that deliver content specifically tailored to engage the individual user. Since this is often high-engagement content, it may be a gateway for viral misinformation. 

Just as curation can be used to engage the user, it can also be adjusted to limit the spread of misinformation. Reducing the recommendation of that content, whether to individual users or on a “trending topics” page, or downranking content to the bottom of a newsfeed helps minimize its spread. Avaaz, a global advocacy group with a focus on online mis- and disinformation, calls this method “Detox the Algorithm,” meaning “to transparently adjust the platforms’ algorithm to ensure that it effectively downgrades known disinformation, misinformation, as well as pages, groups and websites that systematically spread misinformation.” This approach curbs the spread of misinformation, preserving a user’s speech and expression while limiting the content’s potential reach.
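
The intuition behind downranking can be captured in a toy feed-ranking sketch that demotes flagged content rather than removing it. The penalty factor and scoring scheme below are invented for illustration and do not reflect how any particular platform weights content.

    from dataclasses import dataclass

    MISINFO_PENALTY = 0.1  # hypothetical multiplier; flagged items keep 10% of their score

    @dataclass
    class FeedItem:
        engagement_score: float  # whatever signal the ranking model would normally use
        flagged_misinfo: bool    # marked by fact-checkers or internal classifiers

    def ranked_feed(items: list[FeedItem]) -> list[FeedItem]:
        """Order a feed so flagged content sinks toward the bottom but is not removed."""
        def score(item: FeedItem) -> float:
            return item.engagement_score * (MISINFO_PENALTY if item.flagged_misinfo else 1.0)
        return sorted(items, key=score, reverse=True)

As with friction, the content remains available to anyone who seeks it out; what changes is how aggressively the platform’s own systems amplify it.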

Conclusion

As the election is already in progress and November 3 is only days away, the takeaways we initially introduced in August still ring true. First, combating online disinformation requires taking action quickly, before content goes viral or reaches a large population of users predisposed to believe that disinformation. We saw the importance of this when we tracked the propagation of one of President Trump’s misleading tweets through Twitter right before the Republican National Convention. 

Second, crafting comprehensive policies and enforcing them in a clear and transparent manner remains critical. There are and will continue to be legitimate concerns about voter safety and election integrity during this unprecedented election; users will post about COVID-19 hotspots, long lines or other intimidating scenarios that may deter citizens from voting. In practice, determining the line between posting “misleading” content and airing grievances — e.g., when a user shares an opinion or a prediction — could be a challenge. Difficulties striking this balance are not new. However, this does not diminish the stakes of getting it right. Content that uses misrepresentation to disrupt or sow doubt about the larger electoral process or the legitimacy of the election can carry exceptional real-world harm. 

As platforms, journalists and researchers prepare for the weeks ahead, we need to remember that each of our efforts ultimately have the same goal: to mitigate any attempts at undermining election integrity. We at EIP hope that platforms can work together when needed, build on the positive changes they’ve made to their policies, and recognize, as we move into a potentially uncertain period, that they can provide reliability in applying their policies firmly and consistently to content that violates their community standards.
