President Donald Trump threatened Wednesday to close down social media platforms that “silence conservatives,” a day after Twitter for the first time added a fact check to one of his posts.
Trump posted two tweets alleging, without evidence, that expanded mail-in voting could not be “anything less than substantially fraudulent” and would lead to a “Rigged Election.” Twitter added a warning label at the bottom of the tweets and a link reading “Get the facts about mail-in ballots.” The link takes readers to a page with information on voting by mail and posts related to fact checks on Trump’s fraud claims.
Social media platforms’ handling of misleading posts has been under increasing scrutiny since the 2016 election, when the Russian government mounted a campaign to divide and misinform U.S. voters as part of its larger effort to influence the election. Officials and experts have warned that Russia and other foreign actors are trying to do the same thing as the 2020 election nears.
“We have a different policy I think than Twitter on this,” Facebook CEO Mark Zuckerberg told Fox News in an interview airing Thursday. “I just believe strongly that Facebook shouldn’t be the arbiter of truth of everything that people say online. I think in general private companies, especially these platform companies, shouldn’t be in the business of doing that.”
Twitter CEO Jack Dorsey shot back that the fact-check added to Trump’s tweets “does not make us an ‘arbiter of truth.'”
“Our intention is to connect the dots of conflicting statements and show the information in dispute so people can judge for themselves. More transparency from us is critical so folks can clearly see the why behind our actions,” Dorsey said.
Dorsey said his company was simply enforcing its “civic integrity” policy against users sharing misleading content about the election because Trump’s tweets could “mislead people into thinking they don’t need to register to get a ballot.”
Each company has responded to the problem in its own way.
In February, Twitter announced it would begin labeling tweets that contain “synthetic and manipulated media.” In addition to labels and warnings, Twitter said it would reduce the visibility of tweets sharing altered photos or videos, and sometimes provide additional context and information. Tweets that shared media determined “likely to cause harm” would be subject to removal.
In response to the coronavirus pandemic, Twitter said March 16 it was broadening its definition of harm to include “content that goes directly against guidance from authoritative sources of global and local public health information.”
On May 11, the company declared it was adding more labels and warnings to posts related to COVID-19 in order “to provide additional explanations or clarifications in situations where the risks of harm associated with a Tweet are less severe but where people may still be confused or misled by the content.”
And on May 20, Twitter added its civic integrity policy, which says the platform can’t be used for “manipulating or interfering in elections or other civic processes.”
“This includes posting or sharing content that may suppress participation or mislead people about when, where, or how to participate in a civic process,” the policy says.
Though Trump’s mail-in ballot tweets were not explicitly about the coronavirus, expanded vote-by-mail has been endorsed by Democratic and Republican governors who want to give their residents the opportunity to vote without risking the potential exposure to the virus that could come with in-person voting.
Dorsey explained that Trump’s tweets could have confused voters about whether they need to register to vote in order to receive a mail-in ballot and that was why they violated the new policy on election content.
.@Twitter is now interfering in the 2020 Presidential Election. They are saying my statement on Mail-In Ballots, which will lead to massive corruption and fraud, is incorrect, based on fact-checking by Fake News CNN and the Amazon Washington Post….
— Donald J. Trump (@realDonaldTrump) May 26, 2020
“Republicans feel that Social Media Platforms totally silence conservatives voices. We will strongly regulate, or close them down, before we can ever allow this to happen,” Trump tweeted Wednesday. Many Trump supporters have echoed the president’s accusation that the move confirms Twitter is biased against conservatives.
But others have called the company’s decision a good first step to counter false statements or accusations Trump has made on Twitter. Critics say it does not go far enough, pointing to the president’s recent posts making unfounded allegations of murder against cable news host Joe Scarborough, and have called for Trump to be suspended from the platform altogether.
Despite Zuckerberg’s assertion that Facebook and Twitter have different policies, both companies prohibit posts that mislead people about elections.
Facebook’s policy says it will not tolerate content that misrepresents the “dates, locations, times and methods for voting or voter registration,” or “who can vote, qualifications for voting, whether a vote will be counted and what information and/or materials must be provided in order to vote.”
“We remove this type of content regardless of who it’s coming from,” the company said.
Facebook has instituted a number of measures aimed at “fighting the spread of false news.” The company has engaged third-party fact-checkers, sought to reduce the financial incentive for spammers to share misinformation and allowed users to report misleading content when they think they see it.
Rather than remove misleading content, the company’s response is to cut back on the number of people seeing it, even in the case of repeat offenders. Many conservative commentators, including controversial Trump supporters Diamond and Silk, have accused the site of disproportionately limiting the distribution of conservative content.
Like Twitter, Facebook labels media that has been manipulated and removes posts it believes can do harm. And it has included misinformation about the coronavirus outbreak in its definition of harmful content. The platform informs users if they interact with harmful misinformation about the pandemic.
But according to Facebook’s policy, “posts and ads from politicians are generally not subjected to fact-checking.”
If a politician shares content “that has been previously debunked on Facebook” the company “will demote that content, display a warning and reject its inclusion in ads.”
But if “a claim is made directly by a politician on their Page, in an ad or on their website, it is considered direct speech and ineligible for our third party fact checking program – even if the substance of that claim has been debunked elsewhere.”
The company, which has been criticized for that relatively hands-off approach to political content, explained the stance was rooted in its concerns that “by limiting political speech we would leave people less informed about what their elected officials are saying and leave politicians less accountable for their words.”
Instagram, which is owned by Facebook, primarily relies on the same system of fact-checking and labeling Facebook uses to address misinformation.
If the fact-checkers determine something is false or partly false, that content is removed from the site’s hashtag and “Explore” pages and its visibility in people’s feeds is reduced.
In response to the coronavirus, the platform only recommends accounts posting on the outbreak from credible health organizations.
“We also remove false claims or conspiracy theories that have been flagged by leading global health organizations and local health authorities as having the potential to cause harm to people who believe them,” the platform says.
YouTube, which is owned by Google, removes misinformation that is deemed harmful and violates its community guidelines. But the site admits it is still wrestling with how to “reduce the spread of content that comes close to – but doesn’t quite cross the line of – violating” those guidelines.
In January 2019, the company announced it would “begin reducing recommendations of borderline content and content that could misinform users in harmful ways – such as videos promoting a phony miracle cure for a serious illness, claiming Earth is flat, or making blatantly false claims about historic events like 9/11.”
In February, YouTube said that policy led to a “70% average drop in watch time of this content coming from non-subscribed recommendations in the U.S.”
And in April, the company said it was expanding its use of “fact check information panels” to include the U.S.
Contributing: Savannah Behrmann
Read or Share this story: https://www.usatoday.com/story/news/politics/2020/05/27/social-media-platforms-different-approaches-misinformation/5265288002/