How Google, Facebook, and Twitter plan to handle misinformation surrounding 2020 presidential election results



Google, Facebook, and Twitter are preparing for an unprecedented hurdle they may face on the night of Nov. 3: Not knowing who won the 2020 presidential election. 

A massive number of voters are expected to vote by mail, at least partially driven by a desire to avoid contracting the coronavirus. But it’s still unclear whether all ballots will be counted by the night of the election. Any difficulties or delays could ultimately postpone election results by days or weeks, which could allow election misinformation and false claims of victory to go viral.

Google, Facebook, and Twitter will be under increased pressure to control election-related misinformation, which the three have historically struggled to police. Politicians, political campaigns, foreign actors, and even average users have long used the services to disseminate false claims about candidates, and in some cases undermine the credibility of this year’s election given its unique circumstances. 

The three companies recently announced new policies aimed at mitigating false claims of victory. Here’s what they plan to do.

Google
Google is aiming to provide users with quick, reliable information on the election results with help from partners.

The search giant plans to promote information from partners like the Associated Press and the nonprofit Democracy Works in a box atop the search results page. The company also said it has ranking protections in place to ensure that reports claiming early victory will not appear in the search results. 

“I have extreme confidence that the team will handle this algorithmically, but if a challenging piece of information slips through, our policies will allow us to take that down,” said David Graff, Google’s senior director of global policy and standards, during a call with reporters on Thursday.

On the advertising side, Google said it already has policies that prohibit advertisers from using doctored or manipulated media or false claims that could undermine voter participation or trust in the election. Any ads that violate that policy are removed from Google.

Facebook
Facebook recently introduced a new policy directed at handling false claims of victory from candidates and political campaigns. What’s unclear is how the company will handle misinformation about election results from users. 

Last week, Facebook CEO Mark Zuckerberg announced that the company was partnering with Reuters and the National Election Pool to provide authoritative information about the results of the election. The information will be featured in the Voting Information Center, a hub on Facebook that provides users with information from authoritative sources. Facebook plans to proactively notify users when the election results become available.

If any candidate or campaign declares victory before the results are determined, Facebook will attach a label telling users that the winner has not yet been determined. The label will also direct people to the official results. 

But Facebook has yet to announce how it will handle user posts that spread misinformation about the election results. The company said it’s still finalizing those details.  

Twitter
Twitter updated its civic integrity policy on Thursday, saying it plans to label or remove any misleading information about the election results, as well as any disputed claims that could undermine faith in the election itself. 

Twitter said it will evaluate a tweet’s potential to cause harm when deciding whether to remove it. Content with the potential to cause specific harm will be removed, while tweets that mischaracterize information or pose a more general harm will be labeled as such.

For tweets that are not removed, Twitter said labels may provide links to additional clarifications or explanations. Twitter may also warn other users before they share or like the tweet to alert them to the problematic content. The company also may reduce the visibility of the tweet or prevent it from being recommended.
