Platform Policy Analysis 2022


Photo above: Ballot drop boxes in Chicago from the 2020 U.S. elections by Seth Anderson via Flickr / CC BY-NC-SA 2.0.

This Election Integrity Partnership analysis was co-authored by Frances Schroeder, Christopher Giles, Dan Bateyko, Emma Lurie, Ilari Papa, John Perrino, Mishaela Robison, and Emily Tianshi (Stanford Internet Observatory), and Charles Simon and Sukrit Venkatagiri (University of Washington Center for an Informed Public).

Introduction

Good information is critical to free and fair elections. Voters should know how, when, and where to vote, and have faith that once a ballot is cast, it will be counted in a secure and fair way. Accurate information aids the oversight of elections: it helps the public evaluate the election process fairly, informs whether and how to challenge outcomes, and helps identify threats to the officials undertaking that work. Cycles of good information create improved election processes, trust, and space for accountability.

Bad information does the opposite. False and misleading claims may increase the risk that real threats or interference go unnoticed, as attention is diverted to fake ones. Public perceptions of elections can become unmoored from facts. Election officials may receive threats or harassment if claims about them or their work spread among an angry public that has been misled to believe that an election in their jurisdiction was unfairly conducted.

In 2020, the Election Integrity Partnership (EIP) assessed platform policies as part of its analysis of election misinformation on social media platforms. Our assessment highlighted the critical need for platforms to develop and enforce comprehensive policies clearly and transparently. Platform policies shape how information propagates: tech company rules determine what is and is not allowed to be posted on their platforms and apps, as well as what content may be labeled or limited in distribution. While our work examines U.S. elections, these policies are often global. It is therefore important that content creators, as well as the voting public, understand how these rules shape the discourse around their elections.

In 2022, as the U.S. midterms approach, the EIP once again assessed the extent to which platforms have implemented or changed their election policies, both in scope and in enforcement. We additionally assessed newer platforms that have entered this space and become prominent in the intervening years. This document provides an evaluation of platforms’ policies concerning election integrity. Platform policies serve a number of purposes, such as informing users of a platform’s public priorities, values, and rules. To evaluate the platform policies, we looked at how well their publicly available documents covered five categories (defined further below) that the Election Integrity Partnership considers urgent:

  • Procedural interference

  • Participation interference

  • Solicitation of fraud or other election-related misconduct

  • Delegitimization of elections

  • Threats against (or harassment of) election personnel

We evaluated platforms’ public policy pages, as well as company blogs and press announcements. The platforms analyzed in this report had different levels of explanation for their rules, and policies were often decentralized across a number of site pages. Our analysis is limited to English-language policies.

Findings

  • Every company in our assessment has room to improve its election-related policies to make clear what content is allowed and how policies will be enforced.

  • Alternative platforms reviewed — Telegram, Gab, Truth Social, Gettr, Rumble — have general community guidelines or terms of service, but no policies that explicitly address election-related content.

  • Companies with multiple social media products are at times unclear in how their policies apply across their products. For example, it is not always clear (to our analysts) to what extent Meta’s policies apply across Facebook and Instagram, or how Google’s policies apply across Drive and YouTube. 

  • Platforms do not always specify whether they enforce election-related policies year-round or only during election season; because election-related violative content also spreads in the off-season, this clarity is increasingly urgent.

  • Most platforms do not share how policies evolve over time, although many announce policy changes. This analysis can serve as a point-in-time comparison of policies as they stood in 2020 with policies as they stand today.

  • Many platforms publish transparency reports on their policy enforcement; however, those reports are often not granular enough to allow us to assess their enforcement of policies relevant to election integrity.

  • Many platforms govern content they determine to be in the public interest differently (e.g., some commentary from politicians is left up for newsworthiness), but are unclear about when those exceptions apply.

  • Most platforms do not have policies that specifically protect election workers. Election-specific harassment rules would signal that a platform has stepped up enforcement during a critical period. Whether such policies are necessary is an open debate, as these individuals may already be covered under existing threats and harassment policies.

Recommendations and Looking Toward 2024

  • Companies should centralize and be transparent about platform policy changes over time. In 2020 and 2021, researchers called for platforms to consolidate their policies in one place, rather than splitting them across blog posts, community guidelines, and other updates. Efforts to collect tech company election-related policies, like the Tech Platforms Election Database, underscore this problem. Two years after our original recommendation, many companies have not fixed this issue. 

  • Policies should be clearer about enforcement and types of prohibited content. Some platforms provided examples of prohibited content when discussing policies; others did not. How enforcement relates to specific policy violations was at times ambiguous. In a study assessing whether Facebook and Twitter consistently applied labels to the same kinds of misleading, election-related content, researchers drew on Election Integrity Partnership data and found that labeling was applied inconsistently even within platforms.

  • Platforms need to provide greater transparency and data access for independent researchers so that they can evaluate the effectiveness of policies and their enforcement.

Summary of Comparative Evaluation of Platform Policies

In an effort to assess social media platforms’ election-related policies and to compare them across platforms, we developed a list of criteria within each of our five categories. For each platform, we searched for and evaluated the company’s terms of service, community guidelines, blog posts, and press statements to determine (to the best of our ability) whether its published policies addressed each criterion in a way that was clear or vague, or whether it lacked an applicable policy altogether, as defined below. We included the social media platforms Facebook, Instagram, Twitter, TikTok, and YouTube due to their widespread popularity and frequent use as political forums. We did not include other platforms, such as Telegram, Gettr, Gab, Truth Social, and Rumble, in this comparison, as they did not have any specific policies addressing election integrity at the time of review (summer 2022).

Figure 1: Key for Comparative Evaluation of Platform Policies. 

Disclaimer: We have determined a rating for each platform solely based on publicly available information. This post does not seek to assess or analyze enforcement of these policies on the platforms. Platforms may also have general rules which cover our criteria – for example, calls to participate in election fraud, such as by selling votes, may be covered under existing policies against illegal activity. We looked for specific language regarding our criteria to determine our ratings.

Figure 2: Comparative Evaluation of Platform Policies. This figure displays our assessment of Facebook, Instagram, Twitter, TikTok, and YouTube’s platform policies across the criteria we consider important for election integrity.
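To make the rating scheme concrete, the sketch below shows one way the data behind Figure 2 could be represented. It is a hypothetical illustration in Python, not tooling used by the partnership; the platform name, criterion labels, and ratings in the example are placeholders rather than our actual assessments.

```python
from enum import Enum

class Rating(Enum):
    """The three ratings used in the comparative evaluation (see Figure 1)."""
    CLEAR = "clear policy"
    VAGUE = "vague policy"
    NO_POLICY = "no applicable policy"

# The five categories of concern defined in the Definitions section below;
# the full rubric spreads ten criteria across these categories.
CATEGORIES = [
    "Procedural interference",
    "Participation interference",
    "Solicitation of fraud or other election-related misconduct",
    "Delegitimization of elections",
    "Threats against (or harassment of) election personnel",
]

# Hypothetical structure: each platform maps each criterion to a Rating.
# Placeholder values for illustration only, not the partnership's findings.
evaluations = {
    "ExamplePlatform": {
        "Misleading claims about how or when to vote": Rating.CLEAR,
        "Intimidation of voters at polling places": Rating.VAGUE,
        "Threats against election workers": Rating.NO_POLICY,
    },
}

def count_clear(platform: str) -> int:
    """Count how many criteria a platform addresses with a clear policy."""
    return sum(r is Rating.CLEAR for r in evaluations[platform].values())
```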

Definitions

 EIP is concerned with five types of election-related content:

  • Procedural interference: Misleading or uncorroborated content about election procedures, such as when and how to vote (e.g., “your ballot won’t be counted unless you include a photograph of yourself” or “in-person voting is canceled this year”).

  • Participation interference: Misleading or uncorroborated content that could serve to intimidate or dissuade people from participating in the election. Related to voter intimidation or suppression of a person’s desire to vote (e.g., “lines are too long to make it worth it,” “cops are at the polling station,” “they are collecting government debts at the polls”).

  • Solicitation of fraud or other election-related misconduct: Content encouraging people to misrepresent themselves to affect the electoral process, illegally cast or destroy ballots, or participate in illegal forms of election interference (offering to buy or sell votes, encouraging people to forge signatures, voter intimidation, “unplugging the voting machines”).

  • Delegitimization of elections: Misleading, uncorroborated, or superficially true content about the election process (e.g., “The election is being stolen, look at this [manipulated media]”) that undermines confidence in the integrity of the election or the accuracy of results.

  • Threats against (or harassment of) election personnel: Explicit or implied threats against election personnel. Includes both direct threats and language that encourages threats or harassment.


Comprehensive and non-comprehensive: We regard a comprehensive policy as one that specifically relates to our categories of concern for election-related rumors, provides further explanation of the policy, and gives examples.

Platforms

Telegram, Gettr, Gab, Truth Social, Rumble

Since 2020, “alt tech platforms” – a term for social media alternatives to popular platforms like Facebook and Twitter – have grown in popularity. [1] Some perceive these platforms as having fewer governance and content moderation rules than mainstream platforms. According to research by Pew, six percent of Americans regularly obtain news from “at least one of the seven alternative social media sites” studied — which include platforms like Gettr, Gab, Truth Social, and Rumble.

We assessed platform policies on Telegram, Gettr, Gab, Truth Social, and Rumble. In our assessment, none of these platforms has specific policies addressing election integrity. This finding echoes that of other scholars, who contend that alt-tech platforms may have high-level policies similar to those of mainstream platforms but lack the same specificity.

All five platforms have policies around physical harm, threats of violence, and harassment. These policies may apply to threats against election personnel or those participating in the election. Rumble’s policy states that the company may monitor and remove messages in comments, on the live chat, or on its forum that incite violence. Rumble's policies also spell out rules against content which "promotes, supports, or incites violence or unlawful acts" or "promotes, supports, or incites individuals and/or groups which engage in violence or unlawful acts." Gettr’s policy states that the platform does not “tolerate members of our community being harassed or incessantly bullied, particularly in cases that would reasonably cause severe psychological distress.” 

Two platforms — Truth Social and Rumble — mention explicit policies around false, inaccurate, or misleading content, which may apply to content delegitimizing election results. Truth Social’s terms state that by posting content “you thereby represent and warrant that your Contributions are not false, inaccurate, or misleading,” and that use of the service in violation of these terms may result in suspension or termination of a user’s right to use the service. Rumble’s policy states that “any person who knowingly materially misrepresents that material or activity is infringing may be subject to liability for damages. Don't make false claims!” (emphasis in original).

Though these platforms appear to lack explicit election-related policies, many do have other policies, such as those on illegal activities or prohibiting calls to violence, that could be interpreted as applicable. For example, threats against election workers may fall under policies that prohibit violence and harassment. Policies around misleading content or misrepresentation may apply to procedural interference, participation interference, fraud, and delegitimization of elections. 

We found that most “alt tech” policies lacked the specific descriptions of potential enforcement actions that mainstream platforms provide. For example, we did not find policies on these platforms around labeling content or providing information or links to authoritative sources on how and when to vote in the midterm elections.

Meta

Meta’s policies heading into the midterm elections, announced in August 2022, are similar to its approach during the 2020 U.S. presidential election. The policies focus on providing information about the voting process itself, such as how citizens can vote, and elevating election guidance from state and local officials.

Despite having the same parent company, Facebook and Instagram appear to follow different sets of policies around election misinformation. Per our policy comparison table above, Facebook provides clear policies for seven out of the 10 criteria we consider important for elections, while Instagram only provides explicit policies for two of the 10 criteria. This gap in clarity between the two products’ policies around election-related content may confuse users of both Facebook and Instagram, and raises concerns around enforcement transparency and consistency. 

Overall, Facebook’s position is to 1) reduce the spread of election rumors; 2) label false or misleading content; and 3) amplify accurate information on how to participate in the midterms. Facebook’s policy documents indicate that content removal is reserved for a narrow set of content and situations. Facebook’s stated scope for their election policies focuses on content about voter interference and addressing misinformation. 

Election misinformation content may be moderated under other policies related to hate speech, promoting violence or coordinating harm, or dangerous individuals and organizations. For example, Facebook, in an effort to limit misinformation that may lead to violence, has banned militia and QAnon Pages and Groups, and removed the original Stop The Steal Group under its Coordinating Harm policy. 

In its Transparency Center, Facebook states it will remove content that violates its Community Standards. When content violates those standards, Facebook follows a strike system to determine appropriate restrictions on accounts, though whether it applies a strike “depends on the severity of the content,” and all strikes expire after a year.

For other kinds of “problematic” content that does not violate its Community Standards, including “misinformation” and “content by creators that repeatedly violate [Facebook’s] policies,” Facebook may reduce the distribution of the content to users. 

Facebook’s election-related policies contain certain enforcement gaps. “For public awareness,” Facebook labels and leaves up “some posts that go against [its] Community Standards, if the public interest outweighs the risk of harm.” Additionally, [some] politicians are exempt from fact-checking.

Facebook made updates to its policies after the 2020 U.S. election:

  • Procedural Interference: 

    • Facebook changed the word “Misrepresentation” to “Misinformation” in their policies. The implications of this change are unclear.

  • Participation interference

    • In 2020, Facebook had a policy against users claiming Immigration and Customs Enforcement (ICE) agents were at a polling location. Now, Facebook distinguishes between real and false reports of ICE at polling stations, enforcing against the latter – a move that lends clarity to its policy.

    • In 2020, Facebook announced plans to remove calls for poll-watchers that contained intimidating or “militarized language”; it is unclear whether Facebook still removes such language. 

  • Threats against election personnel: Before the 2020 U.S. election, Facebook’s “Coordinating Harm and Promoting Crime” policy did not address election officials. The current policy on election officials requires additional context to enforce.

Twitter 

On August 11, 2022, Twitter announced the activation of its Civic Integrity Policy aimed at protecting election processes for the U.S. midterm elections. Separate from these election-specific policies, Twitter has “Twitter Rules” which address a range of platform abuse types, including violence, abuse, and hateful conduct. 

Twitter’s policies specify four categories of misleading content related to elections: misleading information about participation; suppression and intimidation; misleading information about election outcomes; and false and misleading affiliation. 

Twitter states that it will remove or label content that violates its policies on election integrity. The August 2022 update on election policies explains that the platform will add labels to misleading tweets to provide additional context. It describes the labels as a way to provide additional information about a tweet’s content or to direct users toward trustworthy information. Twitter also may reduce the visibility of tweets that violate its policies. In the case of “potential harm” (no more detailed definition is provided), Twitter said it would prohibit sharing or liking of such content, but not remove it. It is unclear how aggressively Twitter will enforce these content moderation policies and what qualifies for removal versus downranking.

Twitter’s policy documents do not clearly explain the contexts that will lead Twitter to reduce the spread of a piece of content. Twitter’s documentation indicates that the severity of the content combined with the user’s post history guides Twitter’s enforcement of its policies. Twitter may also require users to delete posts. 

Twitter’s enforcement policy considers a number of characteristics of the content. For example, Twitter considers the context, whether the content qualifies as being in the public interest, its severity, and the user’s history of violating policies. It is not obvious what level of severity warrants labeling versus removal.


TikTok

TikTok’s policy approach to election information is a blend of promoting official and authoritative information and enforcing its Community Guidelines, which prohibit election misinformation and threats directed at election workers. Similar to other platforms, TikTok has partnered with fact-checking organizations. TikTok does not accept paid political ads and has extended this ban to the accounts of political candidates, who previously could ask for gifts or donations through their channels.

TikTok will either remove content that violates its Community Guidelines, redirect related searches to those guidelines, or reduce the content’s discoverability. The platform provides examples of election misinformation it takes action on. TikTok states that it will remove “false claims that seek to erode trust in public institutions, such as claims of voter fraud resulting from voting by mail or claims that your vote won’t count; content that misrepresents the date of an election; attempts to intimidate voters or suppress voting; and more.” TikTok may exclude misleading content from its moderation efforts if it meets a standard of “newsworthiness,” except for content that incites violence. For other “content that shares unverified claims, such as a premature declaration of victory before results are confirmed; speculation about a candidate’s health; [and] claims relating to polling stations on election day that have not yet been verified,” TikTok will reduce discoverability, either “redirecting search results or making such content ineligible for recommendation into anyone’s For You feed.”

YouTube

YouTube has policies aimed at combating misinformation, including content that misleads voters on how to vote, encourages interference in the democratic process, incites violence, or advances specific types of election misinformation. YouTube announced in September 2022 that it had already begun to remove videos with false or misleading information related to the U.S. midterm elections. 

YouTube’s approach includes promoting reliable and accurate sources of information to help people participate in voting. When users search for content relating to the midterms, YouTube policy states that it will prioritize recommending content from “reliable sources.” The platform’s policies state that it will limit the spread of “borderline” content. 

In its election-related policies, YouTube specifies five types of prohibited harmful misinformation: misleading posts about voting times and dates; candidate eligibility; interference with voting procedures; distribution of hacked materials “with the intent to interfere in an election”; and false claims about widespread fraud at previous U.S. elections. YouTube provides a disclaimer that “this isn't a complete list.” YouTube states that it will remove content that violates these policies and it will terminate channels that have acquired three strikes within 90 days.
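As a rough illustration of how a strike window like this operates, the sketch below models the “three strikes within 90 days” rule described in YouTube’s public documentation. It is a hypothetical Python sketch with names of our own choosing, not YouTube’s actual enforcement logic.

```python
from datetime import date, timedelta

STRIKE_WINDOW = timedelta(days=90)  # strikes counted within a rolling 90-day window
TERMINATION_THRESHOLD = 3           # three strikes in the window leads to channel termination

def should_terminate(strike_dates: list, today: date) -> bool:
    """Return True if three or more strikes fall within the last 90 days.

    Illustrative only: models the rule as described in YouTube's public
    policy documents, not the platform's implementation.
    """
    recent = [d for d in strike_dates if today - d <= STRIKE_WINDOW]
    return len(recent) >= TERMINATION_THRESHOLD

# Example: one strike outside the window and two inside it do not trigger termination.
strikes = [date(2022, 6, 1), date(2022, 9, 10), date(2022, 10, 1)]
print(should_terminate(strikes, date(2022, 10, 15)))  # False
```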

Notes

[1] Mariëlle Wijermars & Tetyana Lokot (2022) Is Telegram a “harbinger of freedom”? The performance, practices, and perception of platforms as political actors in authoritarian states, Post-Soviet Affairs, 38:1-2, 125-145, DOI: 10.1080/1060586X.2022.2030645
