The coronavirus pandemic has been an unprecedented and frightening time for many of us. As governments put whole countries into lockdown, our only means of information and communication were digital channels. While news programmes, radio shows, and online articles can be reliable sources of information, depending on the outlet, much information is spread via social media platforms. Worldwide access to the internet means that information can now travel at an exponential rate; however, it also means that misinformation can spread just as quickly.
The term “fake news” has become a predominant phrase in the last few years. In many ways, social media platforms such as Facebook, Twitter, and Instagram are major players in the spread of misinformation. Misinformation can range from relatively harmless to catastrophically damaging. This is particularly exemplified in a global pandemic, where false information can be life-threatening: touting drugs as supposed cures for coronavirus, circulating anti-vaccination “facts”, or dismissing official social distancing guidelines. It could be the difference between staying safe and catching the virus and developing a life-threatening case.
Forbes reported: ‘according to the new study published in Misinformation Review, people who get their news mainly from social media are more likely to believe falsehoods about coronavirus. They are also less likely to practice social distancing or to think Covid-19 is a threat. Conversely, those who get their news from more traditional news media are more likely to follow public health recommendations. “There is growing evidence that misinformation circulating on social media poses public health risks,” says co-author Taylor Owen, an Associate Professor at McGill University, in a press release. According to their analysis of millions of tweets and thousands of news articles, false or inaccurate information about coronavirus is far more likely to be circulated on social media than in the traditional news media.’
Several campaigns in recent years have encouraged people to look into the sources they glean information from and to examine their accuracy. This could range from not sharing articles with misleading headlines before reading the content, to examining claims on a deeper level. Campaign groups and organisations have called on social media networks to better tackle misinformation on their platforms. In the year of the coronavirus pandemic this is of the utmost importance. Doctors have frequently commented that the spread of misinformation about COVID-19 is a very real threat to patient health, and that channels such as Facebook are failing to keep people safe. According to CNN, ‘the spread of misinformation peaked at an estimated 460 million views on Facebook in April 2020, right as the pandemic escalated around the world, according to a report about Facebook from Avaaz, a non-profit civil society group.’
A recent article from the BBC showed that in many cases these firms were not swiftly cracking down on misleading posts during the pandemic. The UK organisation the Center for Countering Digital Hate (CCDH) said that many anti-vaccination posts containing misinformation were not taken down by leading social media channels. The BBC reported:
‘CCDH said a total of 912 items posted by anti-vaccine protesters that it had judged to have fallen foul of the companies’ Covid-19 rules were flagged to the firms between July and August.
Facebook received 569 complaints. It removed 14 posts, added warnings to 19 but did not suspend any accounts – representing action on 5.8% of the cases
Instagram received 144 complaints. It removed three posts, suspended one account and added warnings to two posts – representing action on 4.2% of the cases
Twitter received 137 complaints. It removed four posts, suspended two accounts, and did not add any warnings – representing action on 4.4% of the cases
YouTube received 41 complaints. It did not act on any of them’
According to the report, such posts included: ‘an Instagram post saying the “upcoming coronavirus vaccine is a killer” that would cause “DNA-level damage”; a Facebook post claiming the only way to catch a virus was to be “injected with one via a vaccine”; a tweet saying that vaccines cause people to become genetically modified and thus no longer the way God intended; a YouTube clip in which an interviewee claims that Covid is part of a “depopulation agenda” and that vaccines cause cancer.’ Many of these posts apparently referenced conspiracy theories. Whether a person is for or against vaccination, this sort of false information is dangerous and misleading for those who are trying to make their own decisions on the matter.
In response, these companies have said that they have taken many steps to curb the spread of misinformation. Facebook has added warning labels to potentially false or misleading posts and removed over 7 million items altogether. Twitter argued that it prioritised false items that could cause harm, while Google introduced fact-checking panels and banned some clips. However, campaigners are calling for even more action, considering the viral spread of misinformation to be as dangerous as its physical coronavirus counterpart.