Twitter Adds Labels to Posts Spreading Falsehoods About COVID-19 Vaccines

Twitter announced Monday that it will expand its enforcement system against misinformation about COVID-19 vaccines. The first step is adding labels to misleading claims in users' timelines, similar to the labels it applies to false claims about election fraud.

The logo of social network Twitter is displayed on the screen of a smartphone. The company announced new rules on Monday regarding the spread of COVID-19 vaccine misinformation. LIONEL BONAVENTURE/AFP via Getty Images

Users will now see notices warning that content "may be misleading," along with links to vetted public health information. The labels began appearing after Twitter's announcement Monday, which came in the form of a corporate blog post, and are being applied by human moderators rather than automated moderation systems. (Twitter said it plans to eventually introduce AI systems to work alongside humans in the effort.) The labels also appear as pop-up messages in the retweet window.

"As health authorities deepen their understanding of COVID-19 and vaccination programs around the world, we will continue to amplify the most current, up-to-date, and authoritative information," Twitter's post read.

The labels are just one part of the company's new fight against the spread of vaccine misinformation. Twitter also introduced a strike system for violations of its pandemic-related rules, which goes into effect immediately. A "strike" in this instance means being flagged for posting or sharing misleading or fraudulent claims. One strike carries no penalty, but after a second strike, users are locked out of their accounts for 12 hours. The same 12-hour freeze follows a third strike. A fourth violation results in a week-long lockout, and five strikes can lead to permanent suspension.

Twitter began banning tweets spreading false information about the pandemic in March of last year. More rules went into effect in December covering content that promoted what the company deemed conspiracy theories about the COVID-19 vaccines. At that time, users who tweeted such vaccine misinformation were notified via email to delete the tweet (or file an appeal) and could not post again until the content was removed. (When those rules took effect, Twitter acknowledged that context such as account history would be considered before making enforcement determinations.)

Twitter now joins Facebook in trying to stop the spread of misinformation about the coronavirus and its vaccines. On February 8, Facebook announced it was expanding its list of what it considers false claims about COVID-19. The social media network also warned that pages, groups, and personal accounts sharing these claims risked permanent removal.