
Billionaire Businessman Orlando Bravo Claims The Metaverse Will Be Big And Should Be Invested In

Puerto Rican billionaire businessman Orlando Bravo, co-founder and managing partner of the private equity firm Thoma Bravo, claimed this week that the metaverse will be the “big word of 2021” and a big-time investment.

“The metaverse is very investable, and it’s going to be very big.” 


Much like in the movie “Ready Player One,” the metaverse is a sci-fi concept in which humans put on virtual reality gear that allows them to live, work, and play in a virtual world. The concept has been viewed as either a utopian dream or a dystopian nightmare, depending on your standpoint.

Facebook co-founder Mark Zuckerberg announced his company’s plans for the metaverse last month. Zuckerberg also recently changed the name of Facebook to Meta, saying the new company will have a major focus on the metaverse.

“The metaverse is the next frontier just like social networking was when we got started.”


The entire concept of the metaverse has been heavily debated online. One marketing campaign in Iceland even went as far as to mock the metaverse announcement video as a means of bringing in tourists. In the video, a Zuckerberg lookalike introduces viewers to “Icelandverse, a place of enhanced actual reality without the silly-looking headsets.” 

Beyond Facebook, tech companies like Microsoft, Roblox, and Nvidia are already adapting their software so that it’s compatible with the metaverse and can even be used to power it if needed.

Thoma Bravo alone has more than $83 billion in assets under management and a portfolio of more than 40 software companies. Beyond his excitement for the metaverse, Bravo also discussed his passion for cryptocurrency and bitcoin.

“How could you not love crypto? Crypto is just a great system. It’s frictionless. It’s decentralized. And young people want their own financial system. So it is here to stay,” Bravo said.

According To Pearson/NORC Poll, Most Americans Think Misinformation Is A Problem

According to the results of a poll released by the Pearson Institute and The Associated Press-NORC, 95% of Americans believe that misinformation regarding current events and issues is a problem, with 81% saying it’s a major problem.

Additionally, 91% say that social media companies bear responsibility for the spread of misinformation, and 93% say the same of social media users. More Americans blamed social media users, social media companies, and U.S. politicians for the spread of misinformation than blamed the U.S. government or foreign governments, though older adults were more likely than younger adults to blame foreign countries.


41% are worried they have been exposed to misinformation, but just 20% are worried they have spread it themselves. The poll, which surveyed 1,071 adults, found that younger adults are more likely than older adults to worry about having spread misinformation.

Lastly, most Americans felt that social media companies and users, the U.S. government, and U.S. politicians all share responsibility for dealing with the spread of misinformation.

The results of this poll shouldn’t be too surprising, as the threat and spread of misinformation have grown dramatically alongside the rise of social media over the past decade.

In addition, major events such as elections, natural disasters, and the COVID-19 pandemic have been focal points of misinformation. Many people have had their opinions on the virus and vaccines affected by the fake news swirling around them, which shows that something as simple as a lie or exaggeration in an article can have massive negative impacts.

Social media platforms have made attempts in the past to combat misinformation. Back in 2017, Facebook outlined some of the steps it was taking, such as improving fake account detection, partnering with fact-checking organizations to identify fake news, and making it harder for parties that spread misinformation to buy ads. Facebook also promised users easier reporting of fake news and improved news feed rankings.


Those improvements clearly haven’t done much, if anything at all. In 2020, Forbes reported on a study that found Facebook was the leading social media referrer to untrustworthy news sites, directing readers to them over 15% of the time while referring to hard news just 6% of the time. It wasn’t a close margin between platforms, either: Google came in at 3.3% untrustworthy versus 6.2% hard news, while Twitter had 1% untrustworthy versus 1.5% hard news.

Speaking to 60 Minutes, Facebook whistleblower Frances Haugen explained how the tech giant prioritizes what content users see in their news feeds, which has helped spread misinformation engineered to provoke fierce reactions.

“And one of the consequences of how Facebook is picking out that content today is it is — optimizing for content that gets engagement, or reaction. But its own research is showing that content that is hateful, that is divisive, that is polarizing, it’s easier to inspire people to anger than it is to other emotions.”

If you are worried about taking the bait on, or spreading, misinformation, there are plenty of ways to train yourself to have a keener eye. According to The Verge, looking at factors such as the sources behind surveys and infographics, quotes, names and keywords, and the time-sensitivity of an article can all help you determine whether misinformation may be afoot.

You should also take the time to consider other details, such as who is providing the information and how the story is being presented by different media sources. The Verge also urges readers to examine their own feelings: are you having a strong emotional reaction to the article? Do you feel an instant urge to share it? If an article feeds reactions more than it emphasizes actual facts or information, that could be a red flag.

Facebook Postpones “Instagram For Kids”

Following sharp backlash from parents, users, and lawmakers, Facebook has announced that it is pausing its latest venture: “Instagram Kids,” a spin-off of the photo-sharing app that would target tweens between the ages of 10 and 12.

In a statement published on its blog, Facebook explained that while it stands by the need for the project, it will be working with those who were most vocal in their concerns about the planned platform:

“While we stand by the need to develop this experience, we’ve decided to pause this project. This will give us time to work with parents, experts, policymakers and regulators, to listen to their concerns, and to demonstrate the value and importance of this project for younger teens online today.”


The app had been in development since March and was set to be led by Instagram head Adam Mosseri and Facebook vice president Pavni Diwanji, who had previously been influential in Google’s 2015 launch of YouTube Kids.

However, the titan of industry, which acquired Instagram in 2012, did not bow to the wave of criticism or admit failure. Instead, it defended its attempt to target a group that some might argue is the most vulnerable to the dangers and pressures of the online world:

“Critics of “Instagram Kids” will see this as an acknowledgement that the project is a bad idea. That’s not the case. The reality is that kids are already online, and we believe that developing age-appropriate experiences designed specifically for them is far better for parents than where we are today.”

While the app may not be going forward at the moment, there is plenty of merit to creating a safe social media space for younger audiences who, one way or another, will inevitably make their way online.

When you hear the words “middle school” and “social media,” cyberbullying is probably the first thing that comes to mind. Thanks to Instagram’s popularity among teens and its plethora of features, which include direct and group messaging, stories, tagging, posting, and multiple account creation, it has become a breeding ground for aggressive virtual assaults.

According to the Pew Research Center, 59% of teenagers have experienced at least one form of harassment online across social media platforms, including name-calling, negative rumors, and receiving unsolicited explicit images.


Ditch the Label, a U.K.-based anti-bullying charity, conducted a survey in 2017 showing that 78% of the young people surveyed used Instagram, and 42% of those users had experienced some form of cyberbullying on the platform. That was the highest bullying rate of any platform, beating out Facebook by 6%.

The Pew Research Center also found that 66% of teens felt social media platforms were not doing a good enough job of addressing online harassment. Facebook has stated its plans to continue enhancing safety on Instagram, implementing changes such as AI detection technology, restrictions, hidden words, and the ability to make accounts private.

Facebook has also started using cross-checking technology to confirm user ages. Until a couple of years ago, Instagram had only required a new user to input their birth date to confirm they were 13 or older, something that was unbelievably easy for young tweens to lie about.

Despite Facebook’s continued safety measures, a recent Wall Street Journal report revealed that the company is aware of the potential dangers its apps pose to their younger audience, specifically to teen girls. The company, however, has downplayed these concerns publicly.

This new information has led politicians to cast doubt on Facebook and Instagram’s ability to adapt their systems to prioritize the safety of young users while the key features that enable cyberbullying persist.

Facebook Whistleblower To Testify In Front Of Senate Regarding Company’s Impact On Kids

Frances Haugen is a former Facebook product manager who was recently identified as the whistleblower behind the release of tens of thousands of pages of research and documents indicating the company was well aware of the various negative impacts its platforms have, particularly on young girls.

Haugen worked on civic integrity issues within the company. Now she will be questioned by a Senate Commerce subcommittee about what Instagram, which is owned by Facebook, knew about its effects on young users, along with a multitude of other issues.


“I believe what I did was right and necessary for the common good — but I know Facebook has infinite resources, which it could use to destroy me. I came forward because I recognized a frightening truth: almost no one outside of Facebook knows what happens inside Facebook.”

Haugen previously shared a series of documents with regulators and the Wall Street Journal, which published a multi-part investigation showing that Facebook was aware of the problems within its apps, including the negative effects of misinformation and the harm Instagram causes young users.

“When we realized tobacco companies were hiding the harm it caused, the government took action. When we figured out cars were safer with seat belts, the government took action. And today, the government is taking action against companies that hid evidence on opioids. I implore you to do the same here. Facebook’s leadership won’t make the necessary changes because they have put their immense profits before people,” she explained. 

This is not the first time Facebook will be subject to Congressional hearings regarding its power and influence over its users. Haugen’s upcoming testimony speaks to the broader issue of how much power social media platforms hold over personal data and privacy practices.


Haugen has said her goal isn’t to bring down Facebook but to reform it and rid it of the toxic traits that persist today. About a month ago, she filed at least eight complaints with the Securities and Exchange Commission alleging that the company hides research about its shortcomings from investors and, of course, the public.

Democratic Senator Richard Blumenthal, who chairs the Senate Commerce subcommittee on consumer protection, released a statement this Sunday after Haugen’s appearance on “60 Minutes” where she identified herself as the whistleblower.

“From her [Haugen’s] first visit to my office, I have admired her backbone and bravery in revealing terrible truths about one of the world’s most powerful, implacable corporate giants. We now know about Facebook’s destructive harms to kids … because of documents Frances revealed.”

Following the Wall Street Journal’s investigative piece on Facebook, Antigone Davis, the company’s global head of safety, was questioned by members of the same Senate subcommittee, specifically about Facebook’s impact on young users. Davis tried to downplay the idea that the reports amount to a “bombshell,” and, citing “privacy considerations,” did not commit to releasing the full underlying research to back up Facebook’s side of the argument.

“Facebook’s actions make clear that we cannot trust it to police itself. We must consider stronger oversight, effective protections for children, and tools for parents, among the needed reforms,” Senator Blumenthal added.

Facebook Remains Under Fire For Continuously Spreading Covid-19 Vaccine Misinformation 

President Joe Biden called out tech giants and social media platforms like Facebook for failing to tackle the spread of misinformation about the Covid-19 vaccine. The White House released a statement zeroing in on the “disinformation dozen,” 12 major social media accounts that have been shown to be responsible for the majority of anti-vaccine misinformation online.

“Facebook has repeatedly said it is going to take action, but in reality we have seen a piecemeal enforcement of its own community standards where some accounts are taken off Instagram but not Facebook and vice versa. There has been a systemic failure to address this,” said Imran Ahmed, the CEO of the Center for Countering Digital Hate (CCDH), the organization behind the “disinformation dozen” study.


The report identified 12 “superspreader” accounts, and a Facebook spokesperson claims the company has permanently banned all pages, groups, and accounts that “repeatedly break the rules on Covid misinformation, including more than a dozen pages, groups, and accounts from these individuals.”

The CCDH confirmed that 35 of these accounts have so far been removed across multiple social media platforms, but some 62 accounts with a combined 8.4 million followers are still actively spreading anti-vaccine misinformation.

The main issue with these accounts is the number of followers who believe the information is real. Many of the accounts post false claims that the vaccine is unsafe, ineffective, and not worth getting, despite the overwhelming evidence from the multitude of studies conducted on these vaccines before they were distributed to the public.


Jessica González, co-CEO of the media advocacy group Free Press, recently spoke out about how many of these posts are prevalent on Spanish-language Facebook.

“Facebook needs a much better mechanism to stop the spread of false information about the vaccine, and they need to make sure they’re doing that across languages. It’s difficult to gauge the scope of the issue when Facebook doesn’t share figures.”

According to the social media watchdog Accountable Tech, “11 out of the top 15 vaccine related-posts on Facebook last week contained disinformation or were anti-vaccine.”

Vaccination rates in the US are currently plateauing even as new cases continue to rise, almost exclusively among unvaccinated individuals. 67% of Americans have received at least one dose, and 58% are fully vaccinated.

“Action needs to be taken regarding vaccine misinformation. Social media has greatly contributed to this misinformation – there’s no doubt. When we have a public health crisis and people are dying every day, enough is enough,” said Democratic Senator Amy Klobuchar.


Facebook Claims Hackers In Iran Used Platform To Target US Military Personnel 

Facebook announced last week that it had removed 200 accounts run by a group of hackers based in Iran as part of a larger cyber-espionage operation that mainly targeted US military personnel and people working at defense and aerospace companies.

The group, known to security experts as “Tortoiseshell,” used fake online profiles to connect with individuals in the military, build personal relationships, and drive them to other sites where they were tricked into clicking links that infected their systems with spying malware. Some conversations between the hackers and their targets went on for months in order to establish trust.


“This activity had the hallmarks of a well-resourced and persistent operation, while relying on relatively strong operational security measures to hide who’s behind it,” Facebook’s investigations team said in a blog post.

“The group made fictitious profiles across multiple social media platforms to appear more credible, often posing as recruiters or employees of aerospace and defense companies.”

Facebook’s team said the group used email, messaging, and collaboration services to distribute the malware. A spokesperson for Microsoft, whose services were also abused in the cyberattack, said the company was aware of the hacking and would take extra measures to prevent similar attacks in the future.


“The hackers also used tailored domains to attract its targets, including fake recruiting websites for defense companies, and it set up online infrastructure that spoofed a legitimate job search website for the US Department of Labor.”

Facebook said the hackers mainly targeted individuals in the US, along with a few others in the UK and elsewhere in Europe. The campaign has been running since 2020 and has reportedly impacted around 200 individuals.

“The campaign appeared to show an expansion of the group’s activity, which had previously been reported to concentrate mostly on the IT and other industries in the Middle East. Our investigation found that a portion of the malware used by the group was developed by Mahak Rayan Afraz, an IT company based in Tehran with ties to the Islamic Revolutionary Guard Corps,” Facebook said. 

Facebook said it has now blocked the malicious domains it knows of from being shared, and Google is also taking steps to make sure all such domains are blocked.

Facebook Is Entering The World Of Real Estate

Facebook is currently planning to develop a community near its headquarters in Menlo Park, California. The property is set to have a supermarket, restaurants, shops, and a 193-room hotel. 

The company town will be known as Willow Village and will contain over 1,700 apartments on site, including 320 affordable units and 120 set aside specifically for senior citizens.


Willow Village is being developed on a 59-acre site currently occupied by an industrial and research complex. Facebook is collaborating on the project with Signature Development Group, a Bay Area real estate developer known for projects that combine commercial and residential space.

The design for Willow Village is intended to be community-oriented and pedestrian-friendly, with bike trails, generous sidewalks, and numerous public parks, including a quarter-mile elevated park meant to emulate Manhattan’s High Line.

The development will also contain a 1.25-million-square-foot office building that will include a massive glass-dome area known as the “collaboration area.” 

Facebook initially filed paperwork to redevelop the 59-acre site back in 2017 but was met with major resistance from residents of nearby neighborhoods who were worried about the impact on traffic and housing prices.


To accommodate those concerns, Facebook drew up a new blueprint that cut Willow Village’s office space by 30% to make room for 200 more apartments. It also agreed to prioritize construction of grocery stores and other retail options that any resident can use, not just employees.

“We’re deeply committed to being a good neighbor in Menlo Park. We listened to a wide range of feedback and the updated plan directly responds to community input,” said John Tenanes, Facebook’s VP of real estate.

Willow Village will not be just for Facebook employees. The City of Menlo Park is still reviewing Facebook’s proposal to open residential access to the spaces in Willow Village, but the proposal is expected to be approved in the coming weeks.

The goal is to have as many Facebook employees as possible living in the village to support the business. The public aspect will also help the social media giant grow by giving it direct access to the individuals who use its platform every day.

Facebook Scientists Can Now Tell Where Deepfakes Come From 

Artificial intelligence researchers at Facebook have developed new software that can reveal whether a picture or video is a deepfake, as well as where it came from.

Deepfakes are videos that have been digitally altered in some way using AI. Typically, these videos feature hyper-realistic celebrity faces saying whatever the person making the post wants them to say. The videos have become increasingly realistic and popular, making it extremely hard for humans to tell what’s real and what’s not.


The Facebook researchers claim their new AI software can establish whether a piece of media is a deepfake based on a single image taken from the video. The software can also identify the AI model that was used to create the video, no matter how novel the technique.

Tal Hassner, an applied research lead at Facebook, said that it’s “possible to train AI software to look at the photo and tell you with a reasonable degree of accuracy what is the design of the AI model that generated that photo.”
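Facebook hasn’t released the code behind this system, but the underlying idea of model attribution can be pictured as an ordinary image classifier trained to map a single frame to the generator that produced it, picking up on the subtle “fingerprints” each generative model leaves in its output. Below is a minimal, hypothetical PyTorch sketch of that setup; the shallow network and the KNOWN_GENERATORS label set are illustrative assumptions, not Facebook’s actual architecture.

```python
# Hypothetical sketch of generative-model attribution: a small CNN
# that classifies a single video frame as real or as the output of
# one of several known generator families. Illustrative only; this
# is not Facebook's system.
import torch
import torch.nn as nn

# Assumed label set, for illustration only.
KNOWN_GENERATORS = ["real", "gan_family_a", "gan_family_b", "face_swap_autoencoder"]

class AttributionNet(nn.Module):
    def __init__(self, num_classes: int = len(KNOWN_GENERATORS)):
        super().__init__()
        # A production system would use a much deeper backbone trained
        # on frames from many generators; this stack just shows the
        # shape of the problem: image in, generator label out.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pool to (N, 64, 1, 1)
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    model = AttributionNet().eval()
    frame = torch.rand(1, 3, 224, 224)  # stand-in for one extracted frame
    with torch.no_grad():
        probs = model(frame).softmax(dim=1).squeeze()
    for name, p in zip(KNOWN_GENERATORS, probs.tolist()):
        print(f"{name}: {p:.2%}")
```

In practice a classifier like this would be trained on frames from many known generators, and frames from an unseen generator could still be grouped by their shared artifacts, which is the “same mold” idea Hassner describes below.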

Deepfakes in general are a major threat to internet safety; in fact, Facebook banned them back in January 2020 due to the amount of misinformation they were spreading. Individuals can easily create doctored videos of powerful politicians making wild claims that other world leaders might see and take seriously before the video is determined to be fake.

Hassner said that detecting deepfakes is a “cat and mouse game, they’re becoming easier to produce and harder to detect. One of the main applications of deepfakes so far has been in pornography where a person’s face is swapped onto someone else’s body, but they’ve also been used to make celebrities appear as though they’re doing or saying something they’re not.”


Nina Schick is a deepfake expert who has worked closely with the White House and President Biden on this issue. She emphasized that while it’s impressive that the technology to detect these fake videos now exists, it’s just as important to find out how well it actually works in the real world and how well it can track and stop the people who keep making them.

“It’s all well and good testing it on a set of training data in a controlled environment. But one of the big challenges seems to be that there are easy ways to fool detection models, like by compressing an image or a video.”

It’s still unclear how, or even if, Facebook will use this technology to combat the misinformation deepfakes spread on its platform, but Hassner explained that ideally the technology will see broad use in the future.

“If someone wanted to abuse them (generative models) and conduct a coordinated attack by uploading things from different sources, we can actually spot that just by saying all of these came from the same mold we’ve never seen before but it has these specific properties, specific attributes,” he said.


Facebook’s Ban On Donald Trump Will Continue To Hold 

Facebook’s oversight board ruled this Wednesday that the company’s suspension of former President Donald Trump was justified following his role in the January 6th insurrection at the Capitol.

The panel said this means the company doesn’t need to reinstate Trump’s access to Facebook and Instagram. However, it also found that the company was wrong to impose an indefinite ban, giving the platform six months to either restore Trump’s account, make his suspension permanent, or suspend him for a specific period of time.


Facebook joined a multitude of other social media platforms in banning Trump in January after a mob of his supporters stormed the Capitol Building. Facebook, Twitter, Instagram, and a handful of other platforms ruled that Trump had used his accounts to “incite violent insurrection.”

“In applying a vague, standardless penalty and then referring this case to the Board to resolve, Facebook seeks to avoid its responsibilities. The Board declines Facebook’s request and insists that Facebook apply and justify a defined penalty.”

Vice President of Global Affairs and Communications Nick Clegg claimed that “Facebook will now determine an action that is clear and proportionate following the ruling. Until then, Trump’s accounts will remain suspended.”


The board’s ruling could also set a precedent for how social media platforms treat posts from political leaders. The decision to ban Trump has fueled a major debate over the power these tech companies hold, but also over the power political leaders wield through what they say and the influence they carry, especially in the wake of a violent attack on the government.

Many have argued that Facebook’s ban on Trump was long overdue, as his posts often sparked conversations that led to multiple violations of the platform’s hate speech policies; because of his political power, however, those violations were rarely acted upon.

Many researchers also emphasized that Trump’s repeated efforts to undermine the 2020 election and his baseless claims against American democracy and Biden’s win created a social media environment fueled by violent political rage.

The former president, however, has previously teased that regardless of what the platforms decide, he won’t be returning to them and may instead start his own social media platform, essentially a personal blog, to communicate with his supporters.

Congress Questions Tech CEOs Over Role In Capitol Riot

Sundar Pichai of Google, Mark Zuckerberg of Facebook, and Jack Dorsey of Twitter all testified before two committees of the House of Representatives on “social media’s role in promoting extremism and the rampant spreading of misinformation” regarding the pandemic, the Covid-19 vaccine, and the election process.