Inconsistencies in State-Controlled Media Labeling

Contributors: Nicole Buckley, Morgan Wack, Joey Schafer, Martin Zhang, University of Washington Center for an Informed Public


Thirty days out from the U.S. presidential election, platform policies intended to identify state-media influence operations are quickly evolving. However, there is a significant gap between platforms’ policy language and actual policy implementation. For these policies to be effective, Facebook, Instagram, Twitter, and YouTube must fully implement the terms-of-use policies governing how, why, and when labels are appended to foreign state-controlled media accounts, pages, and posts. Because of previous state-controlled media involvement in U.S. elections, it is critical that these outlets’ capacity to create and spread disinformation be diminished to the fullest extent possible.

State-controlled media influence activity is not limited to covert front properties, or dark propaganda — the operations run by entities like the Internet Research Agency. In 2020, overt state-controlled media properties are active participants in political conversation. Many overt state-controlled media properties, directly funded by governments that exert strong editorial control, have increasingly large audiences on several prominent social media platforms. They use advertisements as well as other types of posts — and, more recently, live video streams — to grow their audiences. Often, users encounter these types of content without proactively looking for them. Labels are therefore meant to add context to the content curated by these platforms given that not all users are experts in discerning which of the entities that appear in their feed are state-controlled media outlets. 

Despite platforms’ detailed policy updates, however, label application can be sparse, inconsistent, and difficult to locate. With early voting already underway in many states, there is an urgent need for both transparency about policy application and a commitment to consistency in labeling. Ultimately, platforms are correct in recognizing the potential harm that flows from granting state-controlled media accounts free rein to operate inconspicuously. Through poor implementation, however, platforms not only risk undoing their own intentions but also jeopardize the legitimacy of the upcoming election.

This post examines the current policies of social media platforms related to the labeling of state-controlled media entities.

Key Takeaways

  • In 2018, platforms first introduced state-media labeling policies designed to counter the impact of foreign influence operations on U.S. elections. As the 2020 U.S. general election approaches, platforms have released a greater volume of updates to labeling policies over a shorter period of time.  

  • Although platforms’ labeling policies cast a broad net, state-controlled media outlets’ accounts, pages, and posts are not always accurately labeled, when they are labeled at all, even when they fall within a particular policy’s scope. Inconsistent implementation reduces users’ ability to identify state-controlled messaging. 

  • Facebook and Instagram apply labels most inconsistently against their respective policies. Additionally, these platforms’ newest features lack state-media labels entirely on most mobile and desktop interfaces. 

  • We recommend that platforms turn their attention to implementing existing state-media policy as consistently, and as quickly, as possible. We further advocate for platforms to make existing labels more obvious and for platforms to provide a searchable database of labeled accounts.

Platforms and Policies

In the interest of allowing a wide variety of perspectives to exist on a global platform, academics and civil society organizations have argued that platforms should allow state media outlets to post, but should provide disclosures to alert the viewer that the content comes from a government-aligned outlet. Experts have suggested drawing on the existing framework offered by the Foreign Agents Registration Act (FARA). Intended to preserve freedom of expression and open discourse while ensuring that audiences understand the funding and incentives of the media they encounter, FARA requires government-funded state-media outlets to register with the U.S. Department of Justice and label their material. Social networks, experts argued, could use FARA’s “registrant list” model as a basis for labeling ahead of the 2020 U.S. presidential election.

Between 2017 and 2020, platforms updated their policy language to address widespread concern about foreign influence operations. Google was the first to release a policy relating to its state-controlled media labeling practices; state-funded accounts now receive instructive labels in Google Ads and warnings in Gmail. In 2018, YouTube, a Google subsidiary, announced its decision to append declaratory notices to state-controlled media accounts. Still, Facebook and Twitter, among others, did not provide a policy response geared toward limiting the influence of foreign actors on American elections until 2019. On October 21, 2019, Facebook rolled out new policy language on its “about” domain. The language asserts Facebook’s commitment to stop election interference through increased transparency, advanced in part by its choice to label “state-controlled media on their Page and in our Ad library.” Then, in June 2020, Facebook again expanded its labeling policy to include state affiliation notices under “Page Transparency” tabs. Although Twitter banned all forms of state-sponsored political advertising in 2019, it did not produce a policy response to the continued presence of state-controlled media accounts on its platform until August 2020. That month, Twitter updated its policy and began to label government and state-affiliated media accounts; it also labels prominent editors’ and reporters’ accounts.

Image 1: Timeline of platform policy updates. Only YouTube articulated their labeling policy prior to summer 2020.

As the above timeline shows, most platforms have only recently articulated their state-media labeling policies. If labels are to serve as a bulwark against foreign state actors’ influence on U.S. politics, delays in the labeling process minimize their impact during this election cycle and diminish the likelihood that labels will significantly affect either account subscriptions or views. Analyses of these policies and their impact are further complicated by differences in how labels are applied. Table One, below, details the extent of policy adoption as well as differences between platform policies.

Table One: Platform state-media labeling policies.

Platform | State-media label policy? | Label on account name or page? | Label on content? | Label in search results?
YouTube | Yes: Announced February 2018 | No | Yes | No
Facebook | Yes: Announced June 2020 | Yes: Side of Page | Yes | No
Twitter | Yes: Announced August 2020 | Yes: Top of Page | Yes | Yes
Instagram | Yes: Updated September 2020 | Yes: Top of Page | Yes | No
TikTok | No | No | No | No

Excluding TikTok, each major platform, all of which host prominent state-controlled media accounts, has a labeling policy. Below, we further explore the consistency of product implementation and the coverage of these policies.

Product Implementation

To illustrate differences in implementation, we focus on how platforms have labeled Russian state-controlled media outlets. Because Russian state-controlled media amplified a substantial amount of disinformation during the 2016 general election, we draw primarily on examples from RT (formerly Russia Today, a state-run media entity based in Moscow). Beginning with YouTube, we found that the platform applies labels to the videos RT uploads, but has not labeled RT’s account page:

RT YT Main.png

The label applied to videos RT posts reads “Russia Today is funded in whole or in part by the Russian Government”:

RT YT 2.png

YouTube’s labels are unique in that they include a link to a Wikipedia page associated with the entity. YouTube notes that Wikipedia links are a part of its “information panel,” which is meant to provide “additional information to help you better understand the sources of news content that you watch on YouTube.” YouTube’s decision to use Wikipedia as an informational authority on state-controlled media actors is controversial.

The second platform to introduce state-controlled media labels was Facebook, which in June 2020 began applying labels to both account pages and content. Facebook’s current policy requires labels to identify the country with which a state-controlled media account is affiliated; the label appears below the account’s name in each post it publishes. For example, RT’s Facebook posts include labels that read “Russia state-controlled media”:

RT FB Post.png

Facebook also includes a label on RT’s homepage under the subsection “Page Transparency.” That label contains an icon specific to the page’s affiliation with state-controlled media. However, to view “Page Transparency,” users must scroll past three unrelated sections:

RT FB Page.png

In August 2020, Twitter began labeling account handles and user posts belonging to accounts associated with foreign governments. Like Facebook’s, Twitter’s labels denote the country associated with the account and post in question. The labels read “Russia state-affiliated media”:

RT Tweet.png

In addition to labeling individual posts, Twitter affixes the same label to RT’s “biography,” the first section visible to users who visit its account page. Twitter also uses a microphone icon, placed immediately before the text of each state-media label, to distinguish the label from account-entered biographical information:

RT Twitter Profile.png

Until September 29, state-controlled media outlets with Instagram accounts could be differentiated from ordinary media only by clicking an “About This Account” tab located at the top of each user’s profile. After clicking through, interested parties could then view a label that read: “Russia State-Controlled Media”.

RT Old Insta.png

On September 29, Facebook released an update to Instagram’s policy, noting that Instagram would add “these [state-media affiliation] labels on Instagram to posts and profiles.” Although individual posts are still unlabeled, state-controlled media outlets’ profile pages, such as RT’s, now contain labels visible to mobile application users. However, this change has yet to be applied to the desktop version of Instagram. Though no icon currently accompanies these labels, select Instagram pages now include the following label:

As noted in Table One, state-controlled media outlets’ TikTok accounts, including RT’s primary account, remain unlabeled on the platform.

The growing availability and use of temporary presentations and live features across social media platforms present a new channel of influence for state-controlled media actors. These features contain content that is unlabeled on both Instagram (Instagram Stories) and Facebook (Facebook Stories, and in at least some cases, Facebook Watch and Facebook Live). Although Facebook and Instagram have demonstrated the capacity to label both temporary presentations and live features, attaching “sponsored” tags to advertisements, for example, state-media labels are not applied to these formats. Labels are also absent from state-media outlets’ stories when they are re-posted to ordinary users’ stories.

To illustrate this policy implementation oversight, we reference a Facebook Story posted by In The Now, a page Facebook labeled as state-controlled. Although In The Now disputes Facebook’s characterization, we include its profile to demonstrate that, even when Facebook has applied a label to a page, that label is not visible on Stories.

RT Stories.png

Facebook Live has additional implementation issues. Facebook Live videos are not labeled on mobile devices, and labels are only occasionally visible on Facebook’s desktop interface. In the following three screenshots of RT’s October 2, 2020 Facebook Live broadcast, for example, mobile application users do not see a state-affiliated media label. A label is visible only to desktop users, and only after they click to expand the broadcast, at which point a subtle label appears in gray text at the top-right of the screen. Labels are similarly absent from other Facebook and Instagram features, such as Facebook Watch, Instagram Reels, and IGTV.

RT FB Live.png

Twitter maintains one additional functionality that Facebook, Instagram, and YouTube do not: it allows users to view state-media labels within account search results. On Facebook, Instagram, and YouTube, by contrast, it is exceedingly difficult to surface state-media labels. Thus, the possibility that users might follow, subscribe to, or otherwise engage with state-controlled media directly from a search results page remains an unmitigated loose end on three of the four major platforms we examined. Below are search results for the RT account on Twitter (labeled) and YouTube (unlabeled); neither Facebook nor Instagram uses this type of label.

RT Twitter Search.png

In sum, the differences in platforms’ policy implementation and the lack of consensus on best practices across platforms impair ordinary users’ ability to identify influential state-controlled media outlets in advance of a major election.

Entity Coverage

Additionally troubling is platforms’ limited transparency about why labels are applied unevenly across state-media outlets’ pages and posts. Facebook, Instagram, Twitter, and YouTube are silent about what, if any, factors are used to determine when a “state-controlled media” label is appropriately applied. Generally, we observed that platforms’ broad policy language is implemented narrowly. Inconsistency between policy and practice leaves platform users uninformed as to what implicit factors might guide moderation teams’ decisions to label, or not label, state influence operations.

Beginning with Instagram, we found evidence of varying degrees of inconsistent policy implementation across all four platforms examined:

Instagram Comparison: CGTN America vs. CGTN Europe

RT Insta Comp.png

In October 2019, Facebook, Instagram’s parent company, announced it would apply labels to state-controlled media. In September 2020, Facebook made good on its promise; certain accounts, such as the above-pictured CGTN America Instagram page, received a label. CGTN Europe, however, which is also owned and operated by the Chinese government, notably lacks a label. Instagram’s policy language suggests that both accounts should be labeled; there is no context to suggest why one CGTN page is labeled and the other is left alone.

Facebook Comparison: In Question vs. Going Underground (both shows on RT)

RT FB Comp.png

In 2017, Facebook promised to label state-controlled media. However, its labeling policy is not applied universally. For example, some RT accounts, like the one depicted above (left), carry labels consistent with the most recent iteration of Facebook’s state-controlled media policy. Other RT accounts, however, which are also owned and operated by the Russian government, remain unlabeled.

YouTube and Twitter

Inconsistent labeling appeared least commonly on Twitter and YouTube. On Twitter, although our review was limited to accounts we could find organically, we observed that state-controlled media accounts are consistently labeled. However, Twitter does not universally apply labels to accounts belonging to agents of state-controlled media outlets; that inconsistency marks a distinct gap between Twitter’s policy and practice. Nevertheless, Twitter exhibits the most consistent policy implementation among the four platforms we examined.

Until 2020, YouTube struggled to consistently implement its state-media policy. In 2019, ProPublica reported that 57 state-controlled accounts lacked labels on YouTube’s platform. As of October 2020, YouTube appears to have resolved those lapses in label coverage. However, the wording of labels varies between state-media outlets. For example, German outlet DW receives a notice with passive language (“DW is a German public broadcast service”), whereas Russian outlet Redfish receives a label with assertive language (“redfish is funded in whole or in part by the Russian government”); YouTube does not explain this discrepancy.

Conclusions

Platforms’ state-media labeling policies indicate a willingness to recognize and address foreign actors’ influence. Though the efficacy of platforms’ labeling practices has yet to be determined, their use is a template for future policies intended to combat state-media influence. On the whole, we recommend platforms take the following steps to ensure labels reach as wide an audience as possible:

  • Each platform should detail the criteria used to differentiate between state and non-state-media affiliates;

  • State-media specific icons should be included with labels to enhance clarity;

  • New policies should be widely circulated well in advance of national elections;

  • Labeling policies should be continued indefinitely, not selectively applied during election periods;

  • A list, database, or other collection of labeled accounts should be made publicly available.

In addition to these broad recommendations, we also suggest that the following platforms either amend or introduce these site-specific policies:

Facebook: Increase label visibility, add labels to Facebook Stories and Facebook Live, increase consistency of application, add labels in search results.

Twitter: No platform-specific recommendations beyond the general recommendations at this time.

Instagram: Increase label visibility, add labels to Instagram Stories, IGTV, and Instagram Reels, increase consistency of application, add labels in search results, add labels to desktop interface.

YouTube: Add labels in search results. Add labels to the home pages of state-affiliated media accounts.

TikTok: Add a transparent and consistent state-media labeling policy.

Given previous attempts to influence U.S. elections by state-affiliated media actors, the importance of labeling these actors effectively and consistently has only grown. The policy inconsistencies and omissions detailed here emphasize a need for these platforms to increase the vigilance with which they approach and apply these labels. 

Government pressure to streamline and expand label implementation may aid in facilitating some of our recommended changes. Introduced in the House on October 1, the Foreign Agent Disclaimer Enhancement (FADE) Act aims to add executive oversight to platforms’ content-focused application of state-media labels. However, even if FADE becomes law, platforms’ differing user interfaces and varied business models will likely require individualized changes to their unique labeling practices.

Overall, increasing the consistency and transparency of platforms’ labeling practices will enhance ongoing efforts to mitigate the consequences that flow from covert state-media influence operations.    
