
According To Pearson/NORC Poll, Most Americans Think Misinformation Is A Problem

According to the results of a poll released by the Pearson Institute and the Associated Press-NORC Center for Public Affairs Research, 95% of Americans believe that misinformation regarding current events and issues is a problem, with 81% saying it’s a major problem.

Additionally, 91% say that social media companies are responsible for the spread of misinformation, and 93% say the same of social media users. More Americans blame social media users, social media companies, and U.S. politicians for the spread of misinformation than blame the U.S. government or foreign governments, though older adults are more likely than younger adults to blame foreign countries.

41% are worried they have been exposed to misinformation, but just 20% are worried they have spread it themselves. The poll, which surveyed 1,071 adults, found that younger adults are more likely than older adults to worry about possibly having spread misinformation.

Lastly, most Americans felt that social media companies and users, the U.S. government, and U.S. politicians all share responsibility for dealing with the spread of misinformation.

The results of this poll shouldn’t be too surprising, as the threat and spread of misinformation have grown dramatically alongside the rise of social media over the past decade.

In addition, major events have been at the center of misinformation, such as elections, natural disasters, and the COVID-19 pandemic. Many people have had their opinions on the virus and vaccines affected by the fake news swirling around them, which shows that something as simple as a lie or exaggeration in an article can have massive, negative impacts.

Social media platforms have made attempts in the past to combat misinformation. Back in 2017, Facebook discussed some of the steps it was taking, such as improving fake account detection, working with fact-checking organizations to identify fake news, and making it harder for parties guilty of spreading misinformation to buy ads. Facebook also promised users easier reporting of fake news and improved news feed rankings.

Those improvements clearly haven’t done much, if anything at all. In 2020, Forbes reported on a study that found Facebook was the leading social media site for referrals to untrustworthy news sources, directing users to them over 15% of the time, versus just 6% for hard news. It wasn’t a close margin between social media sites, either: Google came in at 3.3% untrustworthy versus 6.2% hard news, while Twitter had 1% untrustworthy versus 1.5% hard news.

Speaking to 60 Minutes, Facebook whistleblower Frances Haugen explained how the tech giant prioritized what content users would see on their news feeds, which helped lead to the spread of misinformation designed to provoke fierce reactions.

“And one of the consequences of how Facebook is picking out that content today is it is — optimizing for content that gets engagement, or reaction. But its own research is showing that content that is hateful, that is divisive, that is polarizing, it’s easier to inspire people to anger than it is to other emotions.”

If you are worried about falling for or spreading misinformation, there are plenty of ways to train yourself to have a keener eye. According to The Verge, looking at factors such as the sources behind surveys and infographics, quotes, names and keywords, and the time-sensitivity of an article can all help you determine whether misinformation may be afoot.

You should also take the time to consider other details, such as who is providing the information and how the story is being presented by different media sources. The Verge also urges readers to examine their own feelings: are you having a strong emotional reaction to the article? Do you feel the urge to share it instantly? If an article feeds reactions more than it emphasizes actual facts or information, that could be a red flag.

Congress Questions Tech CEOs Over Role In Capitol Riot

Sundar Pichai of Google, Mark Zuckerberg of Facebook, and Jack Dorsey of Twitter all testified before two committees of the House of Representatives on “social media’s role in promoting extremism and the rampant spreading of misinformation” regarding the pandemic, Covid-19 vaccine, and election process.


CEOs Of Google, Facebook, And Twitter To Testify In Front Of Congress On Misinformation

This marks the first time the chief executives of Facebook, Google, and Twitter will be appearing before lawmakers since the Capitol riots and Covid-19 vaccine distributions.


Facebook News Ban In Australia Blocks Pages For Fire Services And Charities

Facebook made the sudden decision to block people in Australia from sharing news, which has led to a multitude of government organization and service group pages being completely removed from the social media platform.


How Social Media Platforms Are Combating The Spread Of Misinformation

Without any solid regulation from the federal government to combat the spread of these falsehoods, it’s up to the platforms themselves to make their users aware when a story, tweet, or post contains important information that may not be 100% correct.


Twitter Deletes 170,000 Accounts Linked To Spreading Misinformation About Covid-19

Twitter announced this week that it has deleted over 170,000 accounts tied to a Chinese state-linked operation that purposefully spread misinformation regarding Covid-19, the protests and politics in Hong Kong, and other current political issues worldwide.

Twitter described around 25,000 of these accounts as the “core network” of the operation, meaning they were the bigger accounts with larger followings. The other 150,000 accounts were used to amplify the messages tweeted by the core network, retweeting, liking, or quote-tweeting the original tweets so they would continue to spread around the platform.

“In general, this entire network was involved in a range of manipulative and coordinated activities. They were Tweeting predominantly in Chinese languages and spreading geopolitical narratives favorable to the Communist Party of China (CCP), while continuing to push deceptive narratives about the political dynamics in Hong Kong,” the company wrote in a blog post.


Twitter went on to state that these particular accounts were also linked to a Chinese state-backed operation from last year that spread misinformation about the Hong Kong protests specifically. The accounts from last year’s operation have since been taken down, but the resurgence of these new accounts focused on Covid-19 has caused the same issue to re-emerge.

The Stanford Internet Observatory (SIO) recently performed an analysis of these accounts to determine which ones were spreading false information pertaining to Covid-19, making them easier to find, target, and delete. Analysts say the accounts had been active since the beginning of the pandemic in March.

The main content spread by these accounts focused heavily on praising China’s initial response to the Covid-19 pandemic, and while most of the accounts had fewer than 10 followers and no biographies, the SIO concluded they had tweeted hundreds of thousands of times (around 350,000 in total).


“Narratives around COVID-19 primarily praise China’s response to the virus, and occasionally contrast China’s response against that of the U.S. government or Taiwan’s response, or use the presence of the virus as a means to attack Hong Kong activists. The English-language content included pointed reiterations of the claim that China – not Taiwan – had a superior response to containing coronavirus,” the SIO wrote in its analysis.

Some of the accounts that Twitter shut down this week were also tied to Russian and Turkish state-linked misinformation efforts. Of the roughly 170,000 accounts, around 1,000 were Russian bot accounts linked to state-backed political propaganda in Russia, and 7,300 were linked to Turkey’s government, primarily praising Turkish president Recep Tayyip Erdoğan.

The Russian and Turkish accounts were found to have collectively tweeted over 40 million times before Twitter took them down. Twitter also announced in its blog post that it would host a conference later this summer to “bring experts, industry, and government together to discuss opportunities for further collaboration around removing deceptive state-backed social media campaigns.”

For accurate information regarding the coronavirus, don’t rely on what you see on social media. Instead, go to the CDC’s website and get the information directly from the source.


Facebook To Begin Flagging Posts Containing False Information About Covid-19

Facebook is taking major initiative to prevent the spread of misinformation regarding the coronavirus pandemic. The platform announced that within the next few weeks, users who have previously liked, reacted to, or commented on posts considered “harmful misinformation” about Covid-19 will be directed to information from sources like the World Health Organization, which has genuine authority on the pandemic.

Users and tech experts alike were shocked by this recent development from Facebook, especially considering that company executives have been quite adamant in the past about not monitoring the information spread across the platform.


“We want to connect people who may have interacted with harmful misinformation about the virus with the truth from authoritative sources in case they see or hear these claims again off of Facebook. The notifications will apply only to Facebook and not our other platforms like Instagram and WhatsApp,” wrote Guy Rosen, Facebook’s vice president of integrity.

Social media in general is known for the “fake news” that spreads within seconds of being posted. From the beginning of the pandemic, dangerous lies regarding Covid-19 circulated on platforms like Facebook, which historically has not had a system for filtering out posts containing misinformation.

However, because this is a global pandemic, platforms such as YouTube and Twitter have joined Facebook in trying to flag as many fake posts as possible. In March alone, Facebook claims to have flagged over 40 million posts from around the world containing “false information” regarding the coronavirus.


Facebook included a visual of what users who have interacted with false posts would see on their feeds. The visual showed a design that basically nudges users to click on resources from WHO specifically if they have concerns regarding the pandemic. The company is still tweaking the model as well and a spokesperson claims that Facebook will continue to “iterate on these designs.”

However, critics of Facebook aren’t satisfied with this minimal first step in combating the spread of lies regarding Covid-19, especially considering this is a life-or-death situation. This “lax moderation” issue has dogged Facebook for a while; the company was criticized just this past fall for spreading lies regarding immigration and politics in general.

“[T]he company has taken a key first step in cleaning up the dangerous infodemic surrounding the coronavirus, but it has the power to do so much more to fully protect people from misinformation. [We’ve] been pushing for stronger fact-checking and for corrections to be issued more broadly on the platform, not just on content about Covid-19. New research commissioned by our organization shows that Facebook corrections have a major impact in shaping users’ views and can effectively reduce people’s belief in misinformation by 50%,” wrote Fadi Quran, campaign director at the nonprofit activist group Avaaz.

Social media platforms are widely used and easily accessible to practically anyone around the world, so it’s hard for these companies to remove every bit of false information spread on their platforms. For now, it’s up to us, the users, to check the sources we’re getting our information from. When in doubt, check the World Health Organization and Centers for Disease Control and Prevention websites for the most accurate information regarding Covid-19.


To Curb Disinformation, Facebook Will Ban Deepfakes

In an era of widespread political disinformation deployed by bad actors to influence democracies, Facebook has faced criticism for its policy of allowing advertisers to use its platform to spread false messages, particularly about politics. Facebook CEO Mark Zuckerberg has argued that such a policy is necessary to protect free speech, and that he doesn’t see Facebook’s role as one of censoring political messages. Certainly, the question of how to handle the spread of disinformation on social media networks is a tricky one, particularly during a time when the president’s reelection campaign overtly makes false claims in social media ads, most notably on Facebook, to influence the voting public. 

Rival social network Twitter has decided to address the problem by banning political ads on Twitter altogether, neatly sidestepping the issue by refusing to participate in it in any capacity. But even in the face of ongoing, intense criticism and action taken on other platforms, Zuckerberg has remained steadfast in his opinion, positioning Facebook as a platform that promotes free speech instead of one that polices the political views of its users. However, amid the intensity of the criticism directed at the social networking giant, Facebook has recently announced it would ban “deepfakes” on the site, an apparent concession to those who are worried about social media’s role in facilitating the spread of false information.


So-called “deepfakes” are the result of new technology, made possible by advances in machine learning using neural networks, that can appear to show a person saying or doing something they never actually said or did by manipulating video to superimpose one person’s face onto another person’s head with near-perfect accuracy. Such videos can be difficult or impossible to detect, even by experts, and as the technology advances, deepfakes become even more convincing and easier to make.


Researchers and political observers around the world have understandably voiced concerns about the potential impact of deepfakes on the spread of information, as the very existence of deepfakes causes one to call into question the legitimacy of videos depicting well-known political figures, which were once considered ironclad evidence of a person’s speech and conduct. To illustrate this point, director Jordan Peele created a deepfake that appears to depict President Obama delivering a warning about the spread of disinformation. Even more disturbingly, deepfakes have also been used to create pornographic videos appearing to depict various well-known celebrities, in violation of these celebrities’ rights to control how their images are used in public forums.


As the dangers that deepfakes pose to individuals and to society as a whole are clear, it’s no surprise that Facebook has taken the step of banning this type of video on its site. However, given the extent of the spread of misinformation online, this action alone is not nearly enough to ensure that bad actors cannot subvert democracies by spreading fake news. For one, deepfakes are difficult to detect, even using computer analysis; as such, Facebook launched the Deep Fake Detection Challenge in an attempt to improve the technology that can determine whether a video has been digitally manipulated. And while Facebook continues to allow the spread of falsehoods in the form of political advertisements, the company has also partnered with independent fact-checkers with the aim of informing users when they are encountering false information. 

While these moves are certainly steps in the right direction, they are likely not enough to stop the spread of fake news, especially given the level of sophistication exhibited by disinformation campaigns around the world, most notably Russia’s interference in the 2016 American presidential election and its likely interference in the upcoming election. While Facebook and other social media giants have learned some lessons from the election interference of the last several years, the rapid pace of technological advancement ensures that the fight against disinformation will not end anytime soon.


New Owners Of Sports Illustrated Cut 25% Of Employees

It’s no surprise that print journalism is dying. With technology as advanced as it is now, newspapers and magazines are struggling to keep up with the multitude of digital sources out there. Sports Illustrated magazine is the latest victim. The magazine has seen a steep decline in readers and subscribers, especially within the past year, prompting higher-ups at the company to make significant layoffs.

Sports Illustrated itself declined to tell any news source how many layoffs would occur; however, according to the Wall Street Journal, the magazine is laying off 25% of its staff, around 40 of its 160 employees. The move came shortly after TheMaven Inc. licensed the rights to all Sports Illustrated digital and print publications. The layoffs are part of a larger plan to get the magazine back on track and away from the quickly dying field of print media. Another part of the plan, also according to the Wall Street Journal, includes TheMaven hiring 200 contract writers to cover current sports events and news.

Meredith Corporation owned the rights to the magazine beforehand and sold them to Maven in June, telling its employees that the transition would come with layoffs, a promise it kept. Writers were nervous walking into the office this Thursday, as a meeting with the new editors-in-chief was supposed to take place to inform employees who still had a job and who didn’t; however, that meeting was delayed until the late evening. Writers weren’t the only group with reason to be nervous, as staff soon learned Maven was replacing Christian Stone, the editor-in-chief of seven years, with Steve Cannella and Ryan Hunt from Maven Inc.

This major transition sparked outrage among Sports Illustrated employees. Longtime writers and higher-ups were fired after years of service and dedication, only to be replaced by less experienced individuals because it’s cheaper, a practice unfortunately not uncommon in corporate America. In response, many Sports Illustrated employees wrote, signed, and sent a petition to Meredith. According to the Wall Street Journal, the petition called on the company to “drop TheMaven and save Sports Illustrated.”

“TheMaven wants to replace top journalists in the industry with a network of Maven freelancers and bloggers, while reducing or eliminating departments that have ensured that the stories we publish and produce meet the highest standards,” the petition said. 

Unfortunately, this isn’t an uncommon issue, and Sports Illustrated has seen major cutbacks before; the economy for writers is simply not what it used to be. With the current war on journalism and the “fake news” epidemic, more and more people are turning to social media platforms for their news. No one goes out and buys a physical magazine or newspaper when the same article is available on their phone within seconds, and while companies like Sports Illustrated have fully digitized, they still struggle to get actual clicks. The clicks an article receives translate into views, which drive the advertising revenue that pays for the publication. Until there’s more of a boom in the click business, major company shakeups like the one Sports Illustrated has endured will continue.


European Court Rules that Countries can Force Facebook to Delete Content

On Thursday, Europe’s top court ruled that countries can force Facebook to delete content and restrict access to information globally, in a ruling that allows countries to ban access to information outside of their own borders. The decision came after a former Austrian politician sued in an attempt to force the social media company to take down negative commentary posted about her on the site by individual users. The politician, Eva Glawischnig-Piesczek, successfully argued that the company is obligated to restrict access to this information around the world, setting a legal precedent that empowers nations to essentially remove information from the internet at will.

As standards for privacy, defamation, and libel vary from country to country, this ruling has wide-reaching implications for how information can be regulated on the Internet, a platform which is by its very nature global and resistant to any one regulatory body. As it is nearly impossible to create a single set of standards for what information should be allowed on a global level, this ruling instead allows nations to enforce their own standards on a global level, concerning advocates for free speech who fear the ruling will lead to mass censorship of legitimate political discussion. Facebook strongly rebuked the ruling, claiming the judgment “undermines the longstanding principle that one country does not have the right to impose its laws on speech on another country.”


Facebook, which is a company based in the United States, nonetheless has to obey the laws of all of the nations in which it operates. The ruling draws attention to the difference in philosophy between the regulation of information in the United States, which takes an almost entirely hands-off approach, and Europe, which is more likely to compel companies like Facebook to restrict access to information. A controversial privacy law in Europe, dubbed “the right to be forgotten,” allows European citizens to compel search engines like Google to remove links to their personal data from search results. No equivalent law exists in the US, and the European Court of Justice last week ruled that this law generally applies only within the European Union.

Facebook represents the public face of the spread of information during an era in which changes in how information spreads around the world have strongly influenced global politics. A report issued by the U.S. Department of Justice in March of this year found widespread interference in the integrity of American elections by Russian operatives, who leveraged social media sites like Facebook and Twitter to spread false information and release stolen documents in a deliberate effort to favor one political party over another. This interference, conducted by a number of countries, is likely to continue and intensify during the 2020 US election and has been used in propaganda efforts at an unprecedented scale around the world. As a growing percentage of the US population gets its news from social media rather than more traditional and reputable news outlets, the electorate is increasingly likely to be unknowingly swayed by information propagated by a foreign power with the intent of undermining the integrity of elections.


Facebook, for its part, has announced plans to take stronger preventative measures to deter the spread of fake news on its platform. Though the company was arguably complicit in allowing Russian interference in 2016 by taking a hands-off approach to the content of advertisers, it has since implemented plans to identify and label fraudulent activity on the site and stricter policies for the types of advertising it allows. That being said, the company has chosen not to limit the speech of politicians who advertise using the platform, even when they lie or break rules, claiming that it’s “not [their] role to intervene when politicians speak.” Despite calls from Democratic contender Kamala Harris and others to ban Donald Trump for breaking the social media site’s rules, Twitter has taken a similar approach, allowing the President to repeat a false narrative that alleges corruption by his political rival Joe Biden.

The European Court’s ruling is just one of the conundrums Facebook and other social media platforms find themselves in with regards to regulating the spread of information around the world. As corporate entities, these platforms have virtually unlimited power to censor the speech they allow their users to circulate. However, there’s no denying that in recent years social media platforms have become more akin to a “public square” than a traditional publisher of information, which suggests they have a responsibility both to allow the free and open exchange of ideas and to curtail speech that poses a clear and present danger. How they, and the governments which have the power to regulate them, manage that responsibility is an ongoing question whose answer ultimately remains to be seen.