Almost 80% Of Americans Have Been Exposed To Misinformation Online Regarding Covid-19, Survey Says

Between social media and the plethora of news outlets reporting on the Covid-19 pandemic, many Americans aren’t sure what information to believe. New data from the Kaiser Family Foundation found that nearly 80% of Americans surveyed said they had heard of at least one of the falsehoods perpetuated by online misinformation and either believed it, or were unsure whether or not it was true. 

“Most commonly, six in ten adults have heard that the government is exaggerating the number of Covid-19 deaths by counting deaths due to other factors as coronavirus deaths and either believe this to be true (38%) or aren’t sure if it’s true or false (22%).”

“One-third of respondents believe or are unsure whether deaths due to the Covid-19 vaccine are being intentionally hidden by the government (35%), and about three in ten each believe or are unsure whether Covid-19 vaccines have been shown to cause infertility (31%) or whether Ivermectin is a safe and effective treatment for COVID-19 (28%),” the authors wrote.

The survey also found that “between a fifth and a quarter of the public believe or are unsure whether the vaccines can give you COVID-19 (25%), contain a microchip (24%), or can change your DNA (21%).”

Outlandish ideas such as vaccine microchips, trackers, and changes to DNA have been reported by “trusted” media outlets and have heavily influenced many Americans’ decisions about whether or not to get vaccinated.

“People’s trusted news sources are correlated with their belief in COVID-19 misinformation. At least a third of those who trust information from CNN, MSNBC, network news, NPR, and local television news do not believe any of the eight false statements, while small shares (between 11% and 16%) believe or are unsure about at least four of the eight false statements.”

These results suggest that traditional media sources are helping people separate facts from falsehoods. However, many Republicans have made it clear that they do not trust sources such as CNN and NPR.

The survey found that nearly four in ten of those who trust Fox News (36%) and One America News (37%), and nearly half (46%) of those who trust Newsmax, say they believe or are unsure about at least half of the eight false statements.

The researchers cautioned, however, that “whether this is because people are exposed to misinformation from those news sources, or whether the types of people who choose those news sources are the same ones who are pre-disposed to believe certain types of misinformation for other reasons, is beyond the scope of the analysis.”

Washington Post reporter Aaron Blake followed up with Kaiser and concluded that the overall numbers “obscure just how ripe the right is for this kind of misinformation.” That’s because, “in most cases, if you exclude Republicans who haven’t heard the claims and focus on just those who are familiar with them, a majority of them actually believe the claims.”

David Leonhardt of The New York Times wrote “Covid vaccines are remarkably effective at preventing severe Covid, and almost 40 percent of Republican adults remain unvaccinated, compared with about 10 percent of Democratic adults. In the Kaiser research, unvaccinated adults were more likely than vaccinated adults to believe four or more of the eight false statements.”

According To Pearson/NORC Poll, Most Americans Think Misinformation Is A Problem

According to the results of a poll released by the Pearson Institute and the Associated Press-NORC Center for Public Affairs Research, 95% of Americans believe that misinformation about current events and issues is a problem, with 81% saying it’s a major problem.

Additionally, 91% say that social media companies are responsible for the spread of misinformation, with 93% saying the same of social media users. More Americans blame social media users, social media companies, and U.S. politicians for the spread of misinformation than blame the U.S. government or foreign governments. However, older adults are more likely than younger adults to blame foreign countries.

41% of respondents are worried they have been exposed to misinformation, but just 20% are worried they have spread it themselves. The poll, which surveyed 1,071 adults, found that younger adults are more likely than older adults to worry about possibly having spread misinformation.

Lastly, most Americans felt that social media companies and users, the U.S. government, and U.S. politicians all share responsibility for dealing with the spread of misinformation.

The results of this poll shouldn’t be too surprising, as the threat and spread of misinformation have grown dramatically alongside the rise of social media over the past decade.

In addition, major events such as elections, natural disasters, and the COVID-19 pandemic have been at the center of misinformation campaigns. Many people’s opinions on the virus and vaccines have been affected by the fake news swirling around them, which shows that something as simple as a lie or exaggeration in an article can have massive negative impacts.

Social media platforms have made attempts in the past to combat misinformation. Back in 2017, Facebook outlined some of the steps it was taking to address the problem, such as improving fake account detection, working with fact-checking organizations to identify fake news, and making it harder for parties guilty of spreading misinformation to buy ads. Facebook also promised users easier reporting of fake news and improved news feed rankings.

Those improvements clearly haven’t done much, if anything at all. In 2020, Forbes reported on a study that found Facebook was the leading social media referrer to fake news, pointing users to untrustworthy sources over 15% of the time while referring to hard news just 6% of the time. It wasn’t a close margin between sites, either: Google came in at 3.3% untrustworthy versus 6.2% hard news, while Twitter had 1% untrustworthy versus 1.5% hard news.

Speaking to 60 Minutes, Facebook whistleblower Frances Haugen explained how the tech giant prioritizes what content users see in their news feeds, which has helped lead to the spread of misinformation designed to provoke fierce reactions.

“And one of the consequences of how Facebook is picking out that content today is it is — optimizing for content that gets engagement, or reaction. But its own research is showing that content that is hateful, that is divisive, that is polarizing, it’s easier to inspire people to anger than it is to other emotions.”

If you are worried about falling for misinformation or spreading it yourself, there are plenty of ways to train yourself to have a keener eye. According to The Verge, looking at factors such as the sources behind surveys and infographics, quotes, names and keywords, and the time-sensitivity of an article can all help you determine whether misinformation may be afoot.

You should also take the time to consider other details, such as who is providing the information and how different media sources are presenting the story. The Verge also urges readers to think about their own feelings: is the article stirring strong emotions? Do you want to instantly share it? If an article feeds into reactions more than it emphasizes actual facts and information, that could be a red flag.

Facebook Postpones “Instagram For Kids”

Following sharp backlash from parents, users, and lawmakers, Facebook has announced that it is pausing its latest venture: “Instagram Kids,” a spin-off of the photo-sharing app that would target tweens between the ages of 10 and 12.

In a statement published on its blog, Facebook explained that while it still sees a need for the project, it will be working with those who were most vocal in their criticism of the planned platform:

“While we stand by the need to develop this experience, we’ve decided to pause this project. This will give us time to work with parents, experts, policymakers and regulators, to listen to their concerns, and to demonstrate the value and importance of this project for younger teens online today.”

The app had been in development since March and was set to be led by Instagram head Adam Mosseri and Facebook vice president Pavni Diwanji. Diwanji had previously been influential in Google’s launch of YouTube Kids back in 2015.

However, the industry titan, which acquired Instagram in 2012, did not back down in the face of the widespread criticism or admit failure. Instead, it defended its attempt to target a group that some might argue is the most vulnerable to the dangers and pressures of the online world:

“Critics of ‘Instagram Kids’ will see this as an acknowledgement that the project is a bad idea. That’s not the case. The reality is that kids are already online, and we believe that developing age-appropriate experiences designed specifically for them is far better for parents than where we are today.”

While the app may not be going forward at the moment, there is plenty of merit to creating a safe social platform space for younger audiences who, one way or another, will inevitably make their way online.

When you hear the words “middle school” and “social media,” cyberbullying is probably the first thing that comes to mind. Thanks to Instagram’s popularity among teens and its plethora of features, which include direct and group messaging, stories, tagging, posting, and the ability to create multiple accounts, it has become a breeding ground for aggressive virtual assaults.

According to the Pew Research Center, 59% of teenagers have experienced at least one form of harassment online across social media platforms. These include name-calling, the spreading of negative rumors, and receiving unsolicited explicit images.

Ditch the Label, a U.K.-based anti-bullying charity, conducted a survey in 2017 showing that of the 78% of young respondents who used Instagram, 42% had experienced some form of cyberbullying on the platform. That was the highest bullying rate of any platform, beating out Facebook by 6%.

The Pew Research Center also found that 66% of teens felt social media platforms were not doing a good enough job of addressing online harassment. Facebook has stated its plans to continue enhancing safety on Instagram, implementing changes such as AI detection technology, restrictions, hidden words, and the ability to make accounts private.

Facebook has also started using cross-checking technology to confirm user ages. Up until a couple of years ago, Instagram only required a new user to enter a birth date to confirm they were 13 or older, something that was unbelievably easy for young tweens to lie about.

Despite Facebook’s continued safety measures, a recent Wall Street Journal report has revealed that the company is aware of the potential dangers their apps hold to their younger target audience, specifically to teen girls. However, the company has downplayed these concerns publicly.

This new information has led politicians to cast doubt on Facebook and Instagram’s ability to properly adapt a system that prioritizes the safety of young users while the apps maintain key features that allow cyberbullying to persist.

Facebook Whistleblower To Testify In Front Of Senate Regarding Company’s Impact On Kids

Frances Haugen is a former Facebook product manager who was recently identified as the whistleblower behind the release of tens of thousands of pages of research and documents indicating the company was more than aware of the various negative impacts its platforms have, particularly on young girls.

Haugen worked on civic integrity issues within the company. Now, Haugen will be questioned by a Senate Commerce subcommittee about what Instagram, which is owned by Facebook, knew regarding its effects on young users and a multitude of other issues. 

“I believe what I did was right and necessary for the common good — but I know Facebook has infinite resources, which it could use to destroy me. I came forward because I recognized a frightening truth: almost no one outside of Facebook knows what happens inside Facebook.”

Haugen previously shared a series of documents with regulators and with the Wall Street Journal, which published a multi-part investigation showing the platform was aware of the problems within its apps, including the negative effects of misinformation and the harm Instagram causes young users.

“When we realized tobacco companies were hiding the harm it caused, the government took action. When we figured out cars were safer with seat belts, the government took action. And today, the government is taking action against companies that hid evidence on opioids. I implore you to do the same here. Facebook’s leadership won’t make the necessary changes because they have put their immense profits before people,” she explained. 

This is not the first time Facebook has been subject to Congressional hearings regarding its power and influence over its users. Haugen’s upcoming testimony will speak to the broader issue of social media platforms and the amount of power they hold over personal data and privacy practices.

Haugen discussed how her goal isn’t to bring down Facebook but to reform it, ridding it of the toxic traits that persist today. Around a month ago, Haugen filed at least eight complaints with the Securities and Exchange Commission, alleging that the company is hiding research about its shortcomings from investors and, of course, the public.

Democratic Senator Richard Blumenthal, who chairs the Senate Commerce subcommittee on consumer protection, released a statement this Sunday after Haugen’s appearance on “60 Minutes” where she identified herself as the whistleblower.

“From her [Haugen’s] first visit to my office, I have admired her backbone and bravery in revealing terrible truths about one of the world’s most powerful, implacable corporate giants. We now know about Facebook’s destructive harms to kids … because of documents Frances revealed.”

Following the Wall Street Journal’s investigative series on Facebook, Antigone Davis, the company’s global head of safety, was questioned by members of the same Senate subcommittee, specifically about Facebook’s impact on young users. Davis tried to downplay the idea that the reports are a “bombshell,” and, citing “privacy considerations,” did not commit to releasing the full research to defend Facebook’s side of the argument.

“Facebook’s actions make clear that we cannot trust it to police itself. We must consider stronger oversight, effective protections for children, and tools for parents, among the needed reforms,” Senator Blumenthal added.

Experts Worried About Rise Of ‘Zoom Dysmorphia’ As Pandemic Continues

Experts are worried that spending too much time staring at ourselves in the “funhouse mirror” of video-conferencing calls will begin to distort our self-images.

Facebook Remains Under Fire For Continuously Spreading Covid-19 Vaccine Misinformation 

President Joe Biden called out tech giants and social media platforms like Facebook for failing to tackle the problem of misinformation being spread about the Covid-19 vaccine. The White House released a statement in which it claimed to have zeroed in on the “disinformation dozen,” a reference to 12 major social media accounts that have been shown to be responsible for spreading the majority of anti-vaccine misinformation online.

“Facebook has repeatedly said it is going to take action, but in reality we have seen a piecemeal enforcement of its own community standards where some accounts are taken off Instagram but not Facebook and vice versa. There has been a systemic failure to address this,” said Imran Ahmed, the CEO of the Center for Countering Digital Hate (CCDH), the organization behind the “disinformation dozen” study.

The report identified 12 “superspreader” accounts, and a Facebook spokesperson claims the company has permanently banned all pages, groups, and accounts that “repeatedly break the rules on Covid misinformation, including more than a dozen pages, groups, and accounts from these individuals.”

The CCDH confirmed that 35 of these accounts have been removed across multiple social media platforms so far, while roughly 8.4 million followers remain spread across 62 active accounts that are still pushing anti-vaccine misinformation.

The main issue with these accounts is the number of followers who believe the information is real. Many of these accounts post false claims that the vaccine is unsafe, ineffective, and not worth getting, despite the overwhelming evidence from the multitude of studies conducted on these vaccines before they were distributed to the public.

Jessica Gonzalez, co-CEO of the media equity group Free Press, recently spoke out about how many of these posts are prevalent on Spanish-language Facebook.

“Facebook needs a much better mechanism to stop the spread of false information about the vaccine, and they need to make sure they’re doing that across languages. It’s difficult to gauge the scope of the issue when Facebook doesn’t share figures.”

According to the social media watchdog Accountable Tech, “11 out of the top 15 vaccine related-posts on Facebook last week contained disinformation or were anti-vaccine.”

Vaccination rates in the US are currently plateauing as new cases continue to rise almost exclusively among unvaccinated individuals. 67% of Americans have received at least one dose and 58% are fully vaccinated.

“Action needs to be taken regarding vaccine misinformation. Social media has greatly contributed to this misinformation – there’s no doubt. When we have a public health crisis and people are dying every day, enough is enough,” said Democratic Senator Amy Klobuchar.

Facebook Claims Hackers In Iran Used Platform To Target US Military Personnel 

Facebook announced last week that it had removed 200 accounts it discovered were run by a group of hackers based in Iran as part of a larger cyber-spying operation mainly targeting US military personnel and people working at defense and aerospace companies.

The group, known to security experts as “Tortoiseshell,” used fake online profiles to connect with military personnel, build personal relationships, and drive targets to other sites where they would be tricked into clicking links that infected their systems with spying malware. Some conversations between the hackers and their targets went on for months in order to establish that trust.

“This activity had the hallmarks of a well-resourced and persistent operation, while relying on relatively strong operational security measures to hide who’s behind it,” Facebook’s investigations team said in a blogpost.

“The group made fictitious profiles across multiple social media platforms to appear more credible, often posing as recruiters or employees of aerospace and defense companies.”

Facebook’s team claimed that the group used email, messaging, and collaboration services to distribute the malware. A spokesperson for Microsoft, whose services were also implicated in the cyberattack, said the company had been made aware of the hacking and would take extra measures to prevent something like this from happening in the future.

“The hackers also used tailored domains to attract its targets, including fake recruiting websites for defense companies, and it set up online infrastructure that spoofed a legitimate job search website for the US Department of Labor.”

Facebook claimed the hackers were mainly targeting individuals in the US, along with a smaller number in the UK and the rest of Europe. The campaign has been running since 2020 and has reportedly affected around 200 individuals.

“The campaign appeared to show an expansion of the group’s activity, which had previously been reported to concentrate mostly on the IT and other industries in the Middle East. Our investigation found that a portion of the malware used by the group was developed by Mahak Rayan Afraz, an IT company based in Tehran with ties to the Islamic Revolutionary Guard Corps,” Facebook said. 

Facebook claimed that it has now blocked the malicious domains that it knows of from being shared, and Google is also taking steps to make sure all domains are blocked.

Head Of Instagram Says App Is ‘No Longer For Sharing Photos’

Adam Mosseri, head of the popular social media app Instagram, said recently that the platform is shifting its focus to compete more directly with TikTok. This means Instagram will begin putting entertainment, video, and shopping at the center of the app’s experience.

“We are no longer a photo-sharing app. The number one reason people say they use Instagram, based on research, is to be entertained.” 

He went on to explain how he recently “told the company that because of this data, Instagram will lean into the entertainment trend and video. TikTok and YouTube are huge competitors to Instagram, so in order to stay relevant, the app must evolve.”

“People are looking to Instagram to be entertained, and there’s stiff competition, and there is more to do and we have to embrace that, and that means change.”

Mosseri discussed how the app is currently experimenting with a change that involves showing users more recommended posts in their feeds that directly relate to the accounts that they already follow. 

Media reports on this shift stated that the changes would “make Instagram theoretically function similarly to how YouTube manages its home page.” TikTok has a similar function that shows users recommended videos and users based on the other posts that they’ve liked. 

Mosseri claims that beyond just posts and users, however, Instagram will be working to make its recommendations more topical, so users can tell the app what kinds of content they want to see more or less of.

Mosseri says that “Instagram’s goals moving forward are to embrace video more broadly beyond its IGTV, Reels, and Stories integrations. Instagram wants to focus on more full-screen, immersive, mobile-first video experiences over the square photo-sharing app that it has been.”

Recommended content has been growing rapidly across all social media platforms. Some users love it and some hate it, which is why Apple recently implemented an update that lets users decide which apps can track their activity.

It will be interesting to see how much Instagram shifts to be more like TikTok or YouTube, and whether that shift helps it gain popularity or causes a drop in user engagement, given how many people say they miss when Instagram was just for uploading one photo at a time.

Britney Spears Speaks Out After Request To End Conservatorship 

Britney Spears spoke in front of a judge for the first time to request an end to her conservatorship. After the pop star made headlines with shocking revelations about what her team and family have put her through, she took to social media to speak directly to the fans who have been so supportive and outspoken about the abuse Spears has faced over the past two decades.

“I just want to tell you guys a little secret, I believe as people we all want the fairy tale life and by the way I’ve posted … my life seems to look and be pretty amazing … I think that’s what we all strive for !!!!” Spears posted on her Instagram.

“I’m bringing this to people’s attention because I don’t want people to think my life is perfect because IT’S DEFINITELY NOT AT ALL … and if you have read anything about me in the news this week … you obviously really know now it’s not !!!!”

“I apologize for pretending like I’ve been ok the past two years … I did it because of my pride and I was embarrassed to share what happened to me … but honestly who doesn’t want to capture their Instagram in a fun light !!!!” 

“Believe it or not, pretending that I’m ok has actually helped … so I decided to post this quote today because by golly if you’re going through hell … I feel like Instagram has helped me have a cool outlet to share my presence … existence … and to simply feel like I matter despite what I was going through and hey it worked … so I’ve decided to start reading more fairy tales,” Spears concluded her post.

While the Instagram caption seemed to make sense given the week Spears has had, many of the fans leading the #FREEBRITNEY movement have pointed out multiple times that, as part of her conservatorship, the singer is not in control of her own social media.

This became especially apparent with the release of the “Framing Britney Spears” documentary and the numerous press conferences held by her father and his attorney, in which they claimed Britney was fine and liked the conservatorship. Many fans asked: if she really felt that way, why has she never said anything herself? Why does it always come through other people or a screen, and never directly from her own voice?

Well, during the virtual hearing in Los Angeles Superior Court on Wednesday, Spears was finally able to speak for herself, and she didn’t hold back about how much she despises her conservatorship.

“I want changes and I want changes going forward. I don’t want to be evaluated to determine if I’ve regained my mental capacity. I just want my life back. All I want is to own my money and for my boyfriend to be able to drive me in his car. I want to sue my family.”

Spears must now file a formal petition to end the conservatorship, something she always had the ability to do but was never told she could. After the petition is filed, the court will appoint an investigator to the case who will speak to everyone involved in the current arrangement.

Future court proceedings may be sealed until an actual resolution is reached. One thing is now certain, however: the world truly knows what Britney Spears has been enduring throughout her career, and a change will be made.

Florida Governor DeSantis Targets Social Media Platforms In Newly Signed Bill

Florida Governor Ron DeSantis is targeting social media platforms in a newly signed bill meant to monitor how those platforms moderate online content.

The legislation is one of the most significant steps taken by a Republican governor since allegations of online censorship were leveled at tech giants Facebook, Google, and Twitter. Tech industry leaders, however, claim that the legislation is unconstitutional and sets the stage for a massive court battle.

On Monday, DeSantis claimed that a “council of censors in Silicon Valley are to blame for shutting down the debate over Covid-19 lockdowns and the origins of the coronavirus.”

“I would say those lockdowns have ruined millions of people’s lives all around this country. Wouldn’t it have been good to have a full debate on that in our public square? But that was not what Silicon Valley wanted to do.”

The bill that he signed specifically “prohibits tech platforms from suspending or banning political candidates in the state, with possible fines of $250,000 per day if the de-platformed candidate is seeking statewide office and $25,000 per day if the candidate is running for a non-statewide office,” according to CNN.

The legislation would also give Florida residents the power to sue tech companies for de-platforming. Similar bills have been in the works in states such as Arkansas, Kentucky, Oklahoma, and Utah. 

US lawmakers have been proposing significant changes to the federal law governing the legal leeway that tech platforms have when it comes to moderating online content. That law, Section 230 of the Communications Act of 1934, has come under fire from Democrats who argue that the platforms benefit from a statute written before the technology even existed.

Tech industry leaders have repeatedly denied blocking or removing content solely based on political ideology. Many tech platforms began flagging posts that discussed the Covid-19 pandemic but spread harmful misinformation.

After multiple Republicans and former president Donald Trump continued to spread falsehoods and misinformation about the 2020 election and the sanctity of our Democracy, many political leaders began getting deplatformed for the harmful information they were spewing. 

Florida’s legislation will “force tech platforms to step back from moderating their sites due to the threat of litigation by any internet user, from foreign extremists to disgruntled internet trolls,” said the Computer and Communications Industry Association, a tech trade group.

“Florida taxpayers will also end up paying their share in the cost of enforcing new regulations, and for the inevitable legal challenges that will come along with the legislature’s effort to adopt a law with glaring constitutional challenges,” CCIA president Matt Schruers wrote in an op-ed for the Orlando Sentinel.

“The First Amendment to the United States Constitution — backstopped by Section 230 — makes it abundantly clear that states have no power to compel private companies to host speech, especially from politicians,” said Oregon Democratic Sen. Ron Wyden, a co-author of Section 230, in a statement regarding the signing of the Florida bill.