Twitter Expands Warning Labels To Slow Spread of Election Misinformation
Expect to see more prominent warning labels on Twitter that make it harder to see and share false claims about the election and the coronavirus, the company said on Friday.
This is the latest step that Twitter is taking to prevent the spread of deliberate misinformation as voters cast their ballots amid a pandemic. Like Facebook and other social media platforms, Twitter has announced a cascade of new rules to stop a flood of hoaxes and false claims aimed at misleading voters.
The social media company will more aggressively limit the impact of posts it labels misleading. Most notably, it will hide tweets with false claims from American political figures, candidates or parties and other high-profile U.S. users behind warning screens. Users will have to click past the warnings to read these tweets.
"Some or all of the content shared in this Tweet is disputed and may be misleading," the warning will read. That label will also appear prominently above the tweet once users click past the warning screen.
It will be harder for such tweets to spread, too. Users won't be allowed to reply to them or retweet them without adding a comment. And the tweets will not be recommended by Twitter's algorithms, meaning users won't see them in their main timelines.
"We expect this will further reduce the visibility of misleading information, and will encourage people to reconsider if they want to amplify these Tweets," Twitter executives Vijaya Gadde and Kayvon Beykpour wrote in a blog post on Friday.
Despite creating more intricate rules designed to stop misinformation, Twitter has been reluctant to remove posts in most cases. It previously used these kinds of warnings on tweets that violated its rules but which it determined should remain online because of public interest, including abusive posts from political leaders and harmful tweets about the coronavirus.
The expanded use of warning labels is likely to have a visible impact on one of Twitter's most prolific and controversial users: President Donald Trump. He has repeatedly made false claims, including about mail-in voting, that Twitter has labeled as misleading. Under the new policy, more of his posts could be hidden behind warning labels and have their visibility reduced.
With less than a month to go until Election Day, social media companies are increasingly alarmed at the potential that their platforms will be used to manipulate or intimidate voters, or to undermine the legitimacy of the election.
Both Twitter and Facebook have struggled to curb the viral spread of misinformation and hoaxes, which often proliferate widely before fact checks and corrections can catch up.
There are some lines Twitter says users cannot cross. On Friday, the company clarified that it would take down posts that try to interfere with the election process or its aftermath, including calls for "violent action."
It gave more details on plans to label posts that claim victory before election results are final. It will direct users to official information about the election, and only consider a race "authoritatively called" if it has been announced by state election officials or in independent, public projections from at least two "authoritative, national news outlets."
Earlier this week, Facebook said it too would crack down on voter intimidation, including removing posts that use "militarized language" in urging people to monitor polling places. Concerns have been growing over possible confrontations after Donald Trump Jr., the president's son, posted a video on social media calling for people to join an "Army for Trump." Facebook also plans to label premature claims of victory.
Other measures Twitter announced on Friday encourage users to think before posting. Users who try to retweet something labeled as misinformation will be shown an alert directing them to "credible information about the topic" before they can continue.
The changes to how misleading information is displayed and shared — whether from high-profile figures or everyday users — go into effect next week and are permanent.
Some additional restrictions will take effect on October 20 and extend at least until the end of election week.
During that time, Twitter will temporarily prompt users to "quote tweet" — adding their own commentary — rather than simply retweet a post. It will also stop recommending tweets from people users do not already follow, a step meant to slow viral amplification.
And it will make changes to the trends it recommends to U.S. users, adding a description to explain why a given term is trending.
"This will help people more quickly gain an informed understanding of the high volume public conversation in the U.S. and also help reduce the potential for misleading information to spread," the company said.
Copyright 2020 NPR. To see more, visit https://www.npr.org.