How Social Media Platforms Are Combating The Spread Of Misinformation

Misinformation has become one of the biggest threats to American democracy, and much of the blame is being placed on the platforms that allow it to spread.


The internet is one of the vastest resources we have access to. Its power is enormous, and so its ability to spread misinformation is boundless. Without any solid regulation from the federal government aimed at stopping these falsehoods from spreading, it falls to the platforms themselves to alert their users when a story, tweet, or post contains important information that may not be entirely accurate.

This has been especially true within the past year, with the Covid-19 pandemic and the 2020 presidential election. Any Twitter user is probably familiar by now with the fact that nearly all of the current president's tweets get flagged for containing misinformation about the election; Trump has repeatedly claimed that he won and has made dozens of baseless claims of voter fraud, all of which have been flagged.

It's not just the president, though. QAnon conspiracy theories and false headlines spread like wildfire on these platforms, and more often than not users aren't even fully reading the articles they share, so they don't realize they may be spreading harmful and false information. Social media platforms have all received a great deal of criticism over the past few years for not acting on the threat of "fake news" and for allowing harmful pieces of false media to remain on their networks for long periods of time. Jim Steyer, founder and chief executive officer of Common Sense Media, a children's online safety non-profit, recently told the press that all of these new measures are "too little, too late," especially after what we've seen with this election.

“They have allowed the amplification of hate, racism and misinformation at a scale unprecedented in my lifetime.”

The 2020 election became a major breaking point in this battle and prompted some of the internet's most powerful platforms to take action against the spread of misinformation, especially where America's democracy and the global health crisis are concerned. Facebook was one of the most criticized platforms, and as a result it introduced a set of new policies for itself and for Instagram, which it owns.

The company banned any content that sought to intimidate voters or interfere with how they were voting, a direct response to Trump encouraging his supporters to "watch" people at the polls. Since October 10th it has also featured information panels and videos at the top of its news feed for everything related to the election and voting, so users could easily register and figure out how to vote. The platform claims it helped over 2.5 million people register to vote this year.

Facebook also said it would stop accepting new political advertisements a week before the election and would indefinitely stop running such ads on November 3rd, which it has done. It has flagged posts from Trump and his administration claiming he won reelection, and it provides links on the flagged posts so users can access legitimate information about the election and voting from credible sources. Twitter followed Facebook on many of these policies and also placed a voter information center at the top of its feed throughout the election season. The platform has been adamant about flagging tweets from any user that contain false information about who won or about voter fraud. If a tweet is flagged, other users cannot retweet it.


“The goal is to remove all false information intended to undermine public confidence in the civic process, and misleading claims about the outcome of an election.”


YouTube also took action this election season, despite its reputation for letting users post videos saying and doing whatever they want, regardless of whether it's true. The platform pledged to "remove content that has been technically manipulated or doctored in a way that misleads users (beyond clips taken out of context) and may pose a serious risk of egregious harm." The biggest example of this was a video of House Speaker Nancy Pelosi that was manipulated to make her seem intoxicated.

The platform has also banned content it knows contains hacked information that could "interfere with certain democratic processes such as the election and censuses." It has also said it will remove content "encouraging others to interfere with democratic processes," a response to circulating videos that told voters to deliberately create longer lines at polling places to make it harder for people voting the other way to cast their ballots.

YouTube also created its own voter information center on its homepage, making registration easy for its users and providing a multitude of resources for first-time voters and for anyone who simply wanted to learn more about the democratic process.

TikTok has also taken major steps to ensure that information on the app about the Covid-19 pandemic is accurate, providing an informational link users can click to learn the facts we currently know about the coronavirus. The platform partnered with the World Health Organization to create pop-ups on every post that mentions a pandemic-related keyword: Covid, corona, pandemic, lockdown, quarantine, and so on. It later expanded that partnership to flag videos about the election as well, so users could be sure they weren't receiving false information.