Election Delegitimization: Coming to You Live

Authors: Samantha Bradshaw, David Thiel, Carly Miller, Renee DiResta


Livestream video became a notable front in the spread of misinformation during the 2020 election. The ability to stream has been available for years on a variety of social media platforms, including platforms dedicated solely to that purpose, and it has grown increasingly popular. In the 2020 US election, prominent influencers, media personalities, and ordinary people used livestream features to discuss the election, at times perpetuating claims about social unrest and voter fraud. Unlike other types of social media content, livestreaming presents unique challenges not only for identifying mis- and disinformation, but also for limiting its spread and reach. Livestreams can have significant reach: some online influencers draw audiences comparable in size to those of mainstream news outlets. During our 2020 election monitoring, we observed three distinct emerging genres of livestreams, each employing distinct tactics and exploiting specific platform vulnerabilities.

In-the-streets documentarians

The first genre of livestreaming focuses on documenting protests, rallies, or other emerging events. Often this is done without commentary or with minimal narration, though there are occasional monologues by the streamer or interviews with bystanders. For example, protests in the Pacific Northwest were often documented by multiple streamers who followed the action, recording events without commentary or interpretation. These streams produce footage that lends itself to subsequent clipping and aggregation; the content is replayed within other streams, or snippets are cut to fit the time limits of other platforms. On Twitch in particular, some streams consist entirely of cycling between aggregated protest livestreams produced by other streamers.

The distribution of these records of action can be co-opted by manipulative actors: we have previously observed instances of protest footage being rebroadcast or looped under the guise of an active “livestream,” seemingly to draw an audience to a channel for commercial reasons, often with inflammatory commentary and active misinformation (such as misattributing the location or date of the event). In the lead-up to the election, Russian-backed media outlets broadcast protest footage on social media platforms to inflate perceptions of conflict and violence across the US. In one example, RT tweeted the same footage twice, attributing the location to Portland in one tweet and to Washington, DC in another.

“Guy at a Computer”

Another popular genre of livestream features a single person broadcasting themselves as they use the internet: browsing websites and social media, and watching the livestreams of others (including mainstream news outlets, though these are cycled frequently to avoid legal issues). The streamer discusses what they are viewing, offering reactions and commentary for the audience in a kind of “man on the screen” variation on the ordinary-person “man on the street” reaction format. These streams often have heavy comment activity from viewers, occasionally including audio call-ins.

One interesting technical dynamic around this format is how it translates across platforms: videos integrate user comments as they appear, aggregating and embedding that commentary so it becomes part of the video rather than separate content. This means that, for example, live comments on a Twitch video show up as part of the video image when the stream is cross-posted to Facebook. As a result, any side commentary from stream watchers on Twitch that violates Facebook’s terms of service cannot be meaningfully acted upon by Facebook without taking action on the entire video.

Another issue is that the format can be used to amplify misinformation that platforms have already acted on in its text or URL form. The streamer actively engages with and restates the messages in that content, often drawn from a multitude of platforms and users, leaving platforms with no granular options for mitigation other than acting on the entire video or the streamer’s account.

Pundit 2.0

The third common format we observed on election night is the “political pundit” style of stream, typically produced by influencers with large audiences. While similar in some ways to “Guy at a Computer,” Pundit 2.0 streamers put out highly produced videos closely resembling TV punditry, often with a full studio setup and multiple commentators. The involvement of multiple people makes it easier to share interstitial screenshots, aggregate information from multiple sources, and cut between commentators, rather than simply streaming a single person’s desktop. For example, Steven Crowder hosted an hours-long stream in which he aggregated footage from multiple sources discussing conspiratorial claims that the use of Sharpies resulted in uncounted votes, presented user-submitted screenshots of ballot instructions without context, and repeated speculation about content that platforms had labeled as misleading when it appeared as text.

Livestream moderation and enforcement challenges

The livestream format presents a number of unique moderation challenges. One issue is granularity. For example, a specific tweet containing misinformation can be identified by its numeric tweet ID and restricted from being shared. In livestreams, many of which go on for hours, it is not possible to restrict only a small fraction of the extended video. Enforcement actions can either be taken against the entire video or channel, or not at all.
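To make the granularity gap concrete, the minimal sketch below uses entirely hypothetical Python types and identifiers (no real platform API is implied): a tweet is a discrete, individually addressable object, while a multi-hour stream can only be actioned at the level of the whole video or channel.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    LABEL = auto()
    RESTRICT_SHARING = auto()
    REMOVE = auto()


@dataclass
class TweetAction:
    # A tweet is individually addressable: the action applies to
    # exactly one piece of content and nothing else.
    tweet_id: int
    action: Action


@dataclass
class LivestreamAction:
    # A livestream has no comparably fine-grained handle. The smallest
    # available targets are the whole video or the whole channel, even
    # if the violating claim occupies only a few minutes of the stream.
    video_id: str
    channel_id: str
    action: Action
    scope: str  # "video" or "channel" -- nothing finer exists


# Restricting one misleading tweet leaves everything else untouched...
tweet_case = TweetAction(tweet_id=1324567890123456789,
                         action=Action.RESTRICT_SHARING)

# ...but a claim made three hours into a six-hour stream can only be
# actioned by labeling or removing the entire video (or channel).
stream_case = LivestreamAction(
    video_id="election-night-stream",   # hypothetical identifiers
    channel_id="example_channel",
    action=Action.REMOVE,
    scope="video",
)
```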

A second challenge is the line between documenting events that are happening (journalistic livestreams) and actively facilitating events to amplify a point of view. An individual streaming a protest where people chant “Stop the Steal” or yell slogans tied to election-related misinformation that platforms have previously moderated is documenting something that is happening. Broadcasting protests that express a political point of view is well within the bounds of free expression, and so is discussing them; such things regularly happen on talk radio and other information channels. However, given the large audiences and amplification of popular streams, the ease of sharing, and the persistence of the videos, platforms must decide what to do when an organizer of such a protest streams it while adding commentary or conducting interviews in which the speaker repeats misinformation that has been actioned, fact-checked, taken down, or had its distribution reduced in other formats. For example, during ballot counting in Arizona, protests were livestreamed by the person who had organized the gathering; the organizer was actively propagating false “SharpieGate” narratives both in their own commentary and in interviews.

Finally, not all livestreams remain online after they have ended. This presents challenges for researchers and civil society organizations who want to study the impact of disinformation in livestreams, because there is no record of the stream to collect. This “disappearing data” also makes it harder to analyze broader trends in disinformation or to trace the networks of actors who coordinate misleading narratives via livestreams.

Policy Recommendations

Social media platforms have different policies governing livestreaming. Most of the time, platforms rely on their general terms of use or community standards when deciding whether to restrict livestreams. However, platforms do not always specify when or how these general policies or community standards apply to livestream content or accounts. In addition to general policies, platforms also have livestream-specific policies. Some of these describe abusive livestream content under categories such as terrorism, child exploitation, and intellectual property infringement. For example, following the 2019 terror attack in Christchurch, Facebook announced new restrictions on its Live features to limit the spread of “terror propaganda” through them. However, most livestream policies do not use specific language around mis- or disinformation.

As detailed in our previous blog posts, the platforms also rolled out several election-related policies that address specific kinds of civic content and speech, including new rules or clarifications about election misinformation and interference. However, aside from TikTok’s, these election-related policy updates did not mention how they specifically apply to livestreams. Additionally, some platforms, such as Twitch, do not have any election-specific policies.

As described above, the miscellany of relevant policies associated with livestreams means that the rules governing when and how platforms demote or remove content and accounts are unclear. In Table 1, we provide an overview of the policies on or applicable to livestreaming for five platforms that offer these features (Facebook, TikTok, Twitch, Twitter/Periscope, and YouTube). We included columns comparing how the platforms managed issues around civic content or speech, such as interference with democratic processes, voter suppression, or incitement of violence or hate. Finally, we included a column comparing the enforcement actions platforms take against livestreamers who break the rules.

Table 1: Overview of Platform Policies on or Applicable to Livestreams

Table 1 Description: A breakdown of platform policies that intersect both election-related content and livestreams. This table was informed by Facebook’s 2019 Livestream Policy, Community Standards, and Elections Policy; TikTok’s Elections Policy; Twitch’s General Policy; Twitter and Periscope’s General Policy and Elections Policy; and YouTube’s General Policy, Elections Policy, and Livestream Policy.

*Twitch defines misinformation as: “feigning distress, posting misleading metadata, or intentional channel miscategorization”

Most platforms do not yet have election misinformation policies that sufficiently take into account the unique facets of livestreams. In addition to outlining clear rules for applying existing content moderation and election-related frameworks to livestreams, specific livestream policies could also include:

  1. Establishing clear rules on live comment embedding: As described above, some livestreams embed comments from other platforms into their videos. Platforms could either disallow the practice entirely or hold video producers responsible for embedded user comments, demonetizing or removing videos wholesale if those comments violate their policies.

  2. Establishing greater friction and consequences for users and accounts who violate livestream policies: This could include placing a time delay on repeat offenders, introducing sharing friction by limiting push notifications or recommendations for violating streams, or deplatforming violating accounts altogether.


Livestreams are growing in popularity. They are an effective way to reach a large audience with entertaining content, so it is important that platforms ensure appropriate policies are in place to maximize free expression while closing the loopholes that presently enable streamers to evade the constraints that apply to other features.
