Seeking To Help and Doing Harm: The Case of Well-Intentioned Misinformation

Authors: Cooper Raterink, Yesenia Ulloa, Tara Kheradpir (Stanford Internet Observatory);
Emerson T. Brooking, Jacqueline Malaret (DFRLab)

Contributors: Daniel Bush (Stanford Internet Observatory); Max Rizzuto (DFRLab)


Main Takeaways

  • Well-intentioned misinformation takes two main forms:

    • Grassroots — voters share a piece of content (e.g. an image, Tweet, or video) wishing to educate, warn, or protect, but instead mislead each other due to inaccuracies in the content, perhaps inadvertently suppressing participation

    • Public Awareness Campaign — nonprofit or other advocacy groups trying to inform on voting procedures make a small error or fail to consider all cases when designing their information campaign

  • When sharing information online, the truthfulness of the content matters more than the intentions of those who shared it. Follow the best practices (Prepare, Verify, Rely) described at the end of this blog post to make sure you have a positive impact on the information ecosystem around the election.

Background on Well-Intentioned Misinformation

The Election Integrity Partnership has identified several instances of apparently well-meaning but misleading content, which typically takes the form of flawed public-service announcements or calls to action. Such content that seeks to help but does harm is best characterized as well-intentioned misinformation: false or misleading information which — if it were not false — would be spread toward constructive ends. This post is motivated by the idea that by describing prototypical cases of well-intentioned misinformation, and by outlining best practices to contain its spread, we might reduce the threat it poses to the upcoming election. 

Well-intentioned misinformation often aims to protect, inform, or warn voters in a way that might encourage voter safety or election integrity, but misses its mark due to inaccuracies in the content. In studying well-intentioned misinformation, we found the following rule-of-thumb test to be helpful: if the inaccurate parts of the content were true, would the message promote a free, fair, and safe election?

Consider, for example, an earlier post in which the EIP assessed a misleading claim circulating among survivors of domestic abuse. The content alleged that survivors hiding from their abusers could not vote, as their voter registration information would be a matter of public record. This claim was false, as it did not account for numerous state-level laws that are specifically intended to shield the identities of survivors. But the claim was intended to be constructive, warning survivors about an imminent danger to their anonymity. For this reason, it is best understood as well-intentioned misinformation. Contrast this example with that of a message using misleading evidence to reinforce a claim that the election is “rigged.” Even if the evidence shared with the message was reliable and relevant, the message would be antagonistic or (at best) neutral to the goals of election integrity.

To gain a better understanding of well-intentioned misinformation, we took a closer look at several pieces of content the EIP uncovered this election season. In doing so, we found that most well-intentioned misinformation is specific and procedural in nature; the most impactful examples are outlined in this blog post. We used this analysis as the basis for identifying best practices for sharing election-related information online, which are presented at the end of this post.

Determining the Prevalence of Well-Intentioned Misinformation

To determine which topical areas are most likely to contain well-intentioned misinformation, we analyzed 127 pieces of misinformation content collected by the EIP from September 3 to October 9. For each piece of content, we determined whether categorizing it as “well-intentioned misinformation” was applicable, borderline applicable, or inapplicable. We further divided content into the twelve narrative categories tracked by the EIP, enabling us to identify the narratives most commonly associated with well-intentioned misinformation.

An examination of 127 pieces of misinformation collected by the EIP, divided by narrative category, and evaluated by the definition of well-intentioned misinformation. Narrative categories for which no well-intentioned misinformation was present are excluded.

Our analysis indicates that well-intentioned misinformation is most likely to be specific and procedural. “Specific, procedural misinformation about how to vote” was the most prevalent narrative by far, with 61 percent of clear cases of well-intentioned misinformation falling within this category. This suggests that well-intentioned misinformation most commonly arises over confusion related to voting processes. 

We also find that well-intentioned misinformation is only rarely related to election delegitimization or misleading claims about voter fraud, and only occasionally to participation interference. This makes intuitive sense: voters who engage in constructive election-related messaging are unlikely to spread content intended to suppress voter participation or to call the legitimacy of the entire process into question.

We further observed that well-intentioned misinformation can be categorized into two distinct types depending on who is spreading it: content which appears to arise from an organic, grassroots effort, and content which is spread by a well-meaning nonprofit or similar advocacy group. We explore these two types of well-intentioned misinformation in case studies below.

Case Study 1: Grassroots-Style Well-Intentioned Misinformation

We found a majority of well-intentioned misinformation tracked by the EIP to be roughly nonpartisan and organic in nature. This grassroots-style misinformation aimed to clarify or inform voters about procedural nuances associated with voting, and generally to ensure more votes are cast and counted. While the content misses this mark due to inaccuracies, the users promoting it seemed to either be unaware of the errors or simply confused.

For example, in one case, a post went viral across platforms in which a user claimed their friend had attended a poll worker training and wanted to share what they had learned in the class. The user asserted that if a poll worker made a mark (e.g. a letter, checkmark, star, or partisan affiliation) on a ballot, the ballot could be disqualified. In reality, in some states a poll worker's mark on a ballot serves a legitimate purpose, such as providing an identifying number used by election officials. Given the localized nature of US election procedures and rules, the information in the post does not apply nationally and could confuse voters or place undue stress on them and their election support systems.


This “friend-of-a-friend” story spread in a chain-message manner, with reposts on TikTok, Facebook, and Twitter. The EIP notified the three companies involved: TikTok removed the post, Facebook applied a warning label, and Twitter did not respond. The EIP received reports of this content from election officials in two states, who posted their own clarifications about the narrative (see image below).

Cherokee County Elections Department posts a clarification to its Facebook page, addressing the “friend-of-a-friend” message and correcting the well-intentioned misinformation.

Two features classify this post as grassroots-style well-intentioned misinformation: a) it has no connection to a broader, top-down messaging campaign. As far as the EIP can tell, this post is the first instance of the narrative, indicating that the user who posted it generated the content. And b) the post's message is primarily intended to protect voters. Although the post inaccurately reflects poll worker policies and may sow doubt in voters' minds about the trustworthiness of poll workers, the narrative seems aimed at raising voter awareness about ballot integrity. Rather than discouraging voters from submitting their ballots, it encourages them to request a new one.

Given the unprecedented scale of mail-in ballots in this year’s election, a patchwork of local election policies, and known attempts from malicious actors to sow further confusion around election integrity, voters are understandably wary of making mistakes that could jeopardize the validity of their ballot. As a nation, we are in uncharted waters and many voters are engaging civically to help others navigate the election process. So it is to be expected that these voters sometimes err in their mission to demystify the voting process and inadvertently spread misinformation. Nevertheless, misinformation which effectively delegitimizes the election or misleads voters — regardless of original intention — can obstruct voting processes and contribute to voter suppression.

Case Study 2: Well-Intentioned Misinformation in Election Infographics

A significant portion of well-intentioned misinformation was broadcast by nonprofit organizations and other advocacy groups publishing election-related information containing small errors. For example, on October 11, EIP analysts identified an Instagram infographic containing slightly misleading information on voting procedures. The infographic lists seven ‘common mistakes’ to avoid that may disqualify a voter’s ballot.


The infographic correctly informs voters of some best practices, such as including sufficient identification with the ballot if required by state law, but it also features partially misleading information. For example, interpreted literally, the first point implies that a ballot filled out in anything other than black or blue ink would be disqualified. However, this policy does not exist in every state. For instance, the deputy administrator of the Maryland State Board of Elections confirmed that Maryland ballots marked with any pen color, or even pencil, are valid.

A further error in the infographic relates to point 5, which claims voters must write the “complete date” on their ballot; no current policy will disqualify a ballot for abbreviating “2020” to “20”. One user commented on the post, “My partner put no date and according to this graphic I put the date wrong.”

The organization which created and originally posted the checklist, 1votecloser, is a California 501(c)(3) nonprofit. Its social media presence is relatively small (32 followers on Twitter, 395 page likes on Facebook, 1.7k followers on Instagram), but this Instagram post had received over 55,000 likes as of October 27. In response to audience feedback, 1votecloser changed the caption to clarify that voting requirements vary by state and county, but failed to address that writing out “2020” is only a precautionary measure. At best, 1votecloser’s post serves as a reminder to voters to ensure their ballot meets the proper requirements. At worst, it may cause voters to doubt whether their ballot was successfully counted. It may even steer them to accidentally attempt to vote twice, which is illegal and a danger to a fair election. Widespread confusion may also overwhelm local officials’ ability to respond to a sudden surge of inbound voters with unfounded concerns about their ballot status, draining valuable resources away from actual incidents.

Curbing the Spread of Well-Intentioned Misinformation

Several factors could combine to create an oversaturation of information on election night, some accurate and some not, making it difficult for voters to find reliable guidance. Within this glut, bad actors have an opportunity to take advantage of communities by reporting false outcomes, making claims of voter suppression, spreading unsubstantiated reports of attacks on election infrastructure, asserting voter or ballot fraud, and seeding social media with other information intended to delegitimize the election. Well-intentioned but unwitting users may amplify this disinformation as it outpaces fact-checking and content moderation efforts. And as we have discussed, these users may even add their own well-intentioned misinformation to the mix.

All voters can play a role in preventing the spread of obvious falsehoods and the amplification of unfounded rumors and fears. A good start is to follow these steps:

  1. Prepare for delays in reporting, and be aware that we may not know the final result on election night. Expect that it may take longer than previous elections to count all the votes as states manage increased mail-in voting and absentee ballots. 

  2. Verify reports about problems in voting or election results through multiple reliable sources. Regardless of whether the information comes from your immediate network, take the time to examine it before passing it on or accepting it as fact. Check the link to a webpage to make sure that you are on the correct site. If you are unsure whether something is true or false, try copying and pasting the headline into a search engine to see if other outlets report on the story, and if so, in what context.

  3. Rely on state and local election officials as the authoritative source of information. In most states, the Secretary of State is the chief election official who administers state elections and maintains official election results; use their website as a source for information. You can also refer to the US Election Assistance Commission’s voter resources, or check the website of your local county clerk. 




