Posts

ai

Facebook And Instagram To Start Labeling Digitally Altered Content ‘Made With AI,’ Meta Says

Meta, the owner of Facebook and Instagram, announced that it would be making major changes to its policies on digitally created and altered media.

Meta will start adding “Made with AI” labels to posts that use artificial intelligence to create photos, videos, and audio published on Facebook and Instagram. The apps will begin adding this label in May. 

Monika Bickert, Meta’s vice president of content policy, stated in a blog post that Meta would “apply separate and more prominent labels to digitally altered media that poses a particularly high risk of materially deceiving the public on a matter of importance, regardless of whether the content was created using AI or other tools,” according to the Guardian.

A spokesperson also stated that Meta will begin applying more prominent high-risk labels immediately. 


This approach marks an overall shift in how Meta treats manipulated content. Instead of removing such content altogether, Facebook and Instagram will now keep the posts up while providing viewers with information about how the media was altered.

A company spokesperson said the “labeling approach would apply to content posted on Facebook, Instagram and Threads. Its other services, including WhatsApp and Quest virtual-reality headsets, are covered by different rules.”

In February, Meta’s oversight board said the company’s existing rules on manipulated media were “incoherent” after reviewing a video of President Joe Biden posted on Facebook last year that had been digitally altered to make it seem as though the president was acting inappropriately.

The board said the “policy should also apply to non-AI content, which is not necessarily any less misleading than content generated by AI, as well as to audio-only content and videos depicting people doing things they never actually said or did,” according to the Guardian.

threads

Threads Reaches 100 Million Sign-Ups As Twitter’s Traffic Falls

In just five days, 100 million users have signed up for Twitter’s rival app, Threads. Meanwhile, Twitter’s user traffic has dropped as the platform continues to battle outages and controversies over its lax moderation policies.

The new platform’s rapid expansion has already outpaced that of ChatGPT, OpenAI’s viral chatbot, which took about two months to reach 100 million users.

Due to Europe’s regulatory complexity, the app has not yet been released there. If it does launch in Europe, it could pose a serious threat to Twitter, which has 238 million daily active users.

Threads’s success can largely be traced to its integration with Meta’s Instagram service. New users can sign up using their already established Instagram handle.

In a post on the platform, Meta’s CEO, Mark Zuckerberg, shared his excitement for the speed of the app’s growth.

“Threads reached 100 million sign-ups over the weekend. That’s mostly organic demand and we haven’t even turned on many promotions yet. Can’t believe it’s only been 5 days!”


Similarweb, a data company specializing in web analytics, found that in the first two full days Threads was generally available, web traffic to Twitter was down 5% compared with the previous week. According to the company, Twitter has seen an 11% drop in website traffic compared with the same period in 2022.

A letter from Elon Musk’s longtime attorney Alex Spiro to Meta alleging “unlawful misappropriation” of trade secrets shows that Musk, Twitter’s owner, is already concerned about Threads.

The letter accuses Threads of hiring former Twitter employees to build a “copycat” platform using confidential information. In a tweet, Elon Musk acknowledged the letter, stating, “Competition is fine, cheating is not.”

Instagram head Adam Mosseri said in a Threads post that Meta’s purpose is not to replace Twitter but rather “to create a public square for communities on Instagram that never really embraced Twitter.”


“The goal isn’t to replace Twitter. The goal is to create a public square for communities on Instagram that never really embraced Twitter and for communities on Twitter (and other platforms) that are interested in a less angry place for conversations, but not all of Twitter. Politics and hard news are inevitably going to show up on Threads – they have on Instagram as well to some extent – but we’re not going to do anything to encourage those verticals.”

Messages posted on Threads will have a 500-character limit. Like on Twitter, users can reply to, repost and quote other user posts. The app has a similar aesthetic to Instagram and also allows users to share posts from Threads directly to their Instagram stories.

Accounts can be public or private, and verification on Instagram carries over to Threads. Mark Zuckerberg also called the app a “public space” in a Threads post after its launch.

“The vision for Threads is to create an open and friendly public space for conversation. We hope to take what Instagram does best and create a new experience around text, ideas, and discussing what’s on your mind.”

meta

Meta Announces They’re Prioritizing Advancing Artificial Intelligence As A Company 

Almost two years after Facebook rebranded as Meta and advertised giving the world a futuristic landscape through the metaverse, the company announced that now, their top investment priority is advancing artificial intelligence (AI). 

CEO Mark Zuckerberg sent a letter to Meta staff on Tuesday announcing plans to lay off 10,000 employees as part of the company’s focus on efficiency, a move first announced last month in Meta’s quarterly earnings call.


Zuckerberg now says Meta will “focus mostly on cutting costs and streamlining projects.” Building the metaverse remains central to defining the future of social connection, Zuckerberg wrote.

“Our single largest investment is in advancing AI and building it into every one of our products.” 

He added that AI tools can help users of Meta’s apps express themselves and discover new content, and that new AI tools can “be used to increase efficiencies internally by helping engineers write better code faster.”

The CEO described last year as a “humbling wake-up call as the world economy changed, competitive pressures grew, and our growth slowed considerably.”

AI in general has been taking over the tech world, and Meta is no exception; in fact, the company has been involved in AI research and development since its days as Facebook.


“I do think it is a good thing to focus on AI,” Ali Mogharabi, a senior equity analyst at Morningstar, told CNN.

“Meta’s investments in AI has benefits on both ends because it can improve efficiency for engineers creating products, and because incorporating AI features into Meta’s lineup of apps will potentially create more engagement time for users, which can then drive advertising revenue,” he explained.

“A lot of the investments in AI, and a lot of enhancements that come from those investments in AI, could actually be applicable to the entire metaverse project,” Mogharabi stated. 

Last year, Meta lost more than $13 billion from its “Reality Labs” unit, the business sector focused on developing and expanding the metaverse. This shift comes after multiple big investors expressed their concerns over the lack of growth that came from the sector. 

Angelo Zino, a senior equity analyst at CFRA Research, said “the second round of layoffs at Meta officially make us convinced that Mark Zuckerberg has completely switched gears, altering the narrative of the company to one focused on efficiencies rather than looking to grow the metaverse at any cost.”

meta

Meta To Launch Paid Subscription Services For Facebook And Instagram 

Mark Zuckerberg announced on Instagram this weekend that Meta is currently testing a subscription service in which users of Instagram and Facebook can pay to get verified, similar to Twitter’s recently launched paid Twitter Blue service, where users can pay for verification.

Meta will be releasing “Meta Verified” in Australia and New Zealand this week, where users will have the option to pay either $11.99 a month on the web or $14.99 a month on iOS devices.


The service will also include extra protection from impersonation accounts and direct access to customer support services. 

To avoid an increase in fake accounts, users who want the paid service will need to provide proof through a government ID which matches their profile name and picture; users must also be 18 to be eligible for the service. 

“This new feature is about increasing authenticity and security across our services.”

A Meta spokesperson also stated that there will be “no changes to accounts that are already verified,” as verification was previously given to users who are “authentic and notable.”


“We are evolving the meaning of the blue badge to focus on authenticity so we can expand verification access to more people. We will display the follower count in more places so people can distinguish which accounts are notable public figures among accounts that share the same name.”

Twitter recently launched its own version of this paid verification subscription service, Twitter Blue, in December.

This move came from Twitter after an influx of fake “verified” accounts began taking over the platform.

For Twitter, each checkmark is a different color to differentiate what type of account is verified: gold check marks for companies, gray for government entities and other government organizations, and blue for the average individual.

Twitter Blue currently costs $11 a month for iOS and Android users.

google

Texas Sues Google Over Facial Data Collection

The state of Texas is suing Google for illegally collecting Texans’ facial and voice recognition information without their consent, according to a statement issued by the state attorney general’s office on Thursday.

For over a decade, a Texas consumer protection law has barred companies from collecting data on Texans’ faces, voices or other biometric identifiers without receiving prior informed consent. Ken Paxton, the state’s attorney general, said Google violated this law by recording identifiers such as “a retina or iris scan, fingerprint, voiceprint, or record of hand or face geometry.”

“In blatant defiance of that law, Google has, since at least 2015, collected biometric data from innumerable Texans and used their faces and their voices to serve Google’s commercial ends. Indeed, all across the state, everyday Texans have become unwitting cash cows being milked by Google for profits.”

The law imposes a $25,000 fine for every violation. According to reports, millions of users in Texas had their information stored. The complaint explicitly references the Google Photos app, Google’s Nest camera, and Google Assistant as means of collection.


A spokesman for Google, José Castañeda, accused Paxton of “mischaracterizing” products in “another breathless lawsuit.”

“For example, Google Photos helps you organize pictures of people by grouping similar faces, so you can easily find old photos. Of course, this is only visible to you, and you can easily turn off this feature if you choose and we do not use photos or videos in Google Photos for advertising purposes. The same is true for Voice Match and Face Match on Nest Hub Max, which are off-by-default features that give users the option to let Google Assistant recognize their voice or face to show their information. We will set the record straight in court.”

This lawsuit is the latest in a string of major cases brought against the company. Earlier this month, Arizona settled a privacy suit against Google for $85 million. Indiana, Washington and the District of Columbia also sued Google in January over privacy invasions related to location tracking.

In a much larger antitrust case, 36 states filed a lawsuit against Google in July over its control of the Android app store.

Paxton has gone after large technology corporations in the past for their privacy and monopolizing practices. In 2020, his office joined nine other states in filing an antitrust lawsuit against Google, which accused it of “working with Facebook Inc. in an unlawful manner that violated antitrust law to boost its already-dominant online advertising business.”


After the Jan. 6 insurrection, Paxton demanded that Twitter, Amazon, Apple, Facebook and Google be transparent about their content moderation procedures. This year, he also opened an investigation into Twitter over its reported percentage of fake accounts, saying that the company may be misrepresenting its numbers to inflate its value and raise its revenue.

In February, Paxton sued Meta for facial recognition software it provided users to help tag photos. The lawsuit is ongoing. However, Instagram is now required to ask for permission to analyze Texans’ facial features to properly use facial filters.

“Google’s indiscriminate collection of the personal information of Texans, including very sensitive information like biometric identifiers, will not be tolerated. I will continue to fight Big Tech to ensure the privacy and security of all Texans.”

Texas enacted its privacy law covering biometric identifiers in 2009, around the same time other states were implementing similar laws. Texas’s law was unique in that, in the case of violations, the state itself would have to sue on behalf of consumers.

mobile

GLAAD Report Shows Social Media Giants Aren’t Doing Enough To Protect LGBTQ Users

When it comes to protecting groups that are vulnerable to slurs and harassment, a new report shows major social media platforms are falling short.

According to advocacy group GLAAD’s Social Media Safety Index (SMSI), which assesses and provides recommendations for the five major platforms (TikTok, Twitter, Instagram, Facebook, and YouTube), all platforms scored below 50% out of a possible 100%.

The SMSI grades platforms on 12 LGBTQ-specific factors, which include gender pronouns on user profiles, third-party advertisers, content moderator training, actions to restrict harmful content, and stopping the removal of or demonetizing legitimate LGBTQ content.


Instagram scored highest (48%), while TikTok came in last with 43%. Twitter received zeros in five of the 12 categories, the most of any platform. How LGBTQ members are received on social media plays a big role in the real world, GLAAD president and CEO Sarah Kate Ellis explained.

“This type of rhetoric and “content” that dehumanizes LGBTQ people has real-world impact. These malicious and false narratives, relentlessly perpetuated by right-wing media and politicians, continue to negatively impact public understanding of LGBTQ people — driving hatred, and violence, against our community,” Ellis said in a letter.

Ellis noted that the strategy of using misunderstanding and hate to build support for legislation, with politicians having proposed 325 anti-LGBTQ bills since the start of 2022, is something “we’ve seen across history.”

The SMSI grades line up with how users feel. A survey by GLAAD found that 84% of LGBTQ adults agree there aren’t enough protections on social media to prevent discrimination, harassment, or disinformation, while 40% of LGBTQ adults and 49% of transgender and nonbinary people don’t feel safe on social media.

The five platforms did excel in certain areas. Meta (the parent company of Facebook and Instagram) was just one of two that disclosed information on the training of content moderators while having a clear policy on prohibiting LGBTQ-offensive advertising.

GLAAD also highlighted TikTok and Twitter’s feature of preventing users from misgendering or deadnaming nonbinary and transgender people and recommended all platforms follow that innovative lead.


“This recommendation remains an especially high priority in our current landscape where anti-trans rhetoric and attacks are so prevalent, vicious, and harmful,” GLAAD’s senior director of social media safety, Jenni Olson, said.

However, those positives were overshadowed by a sea of negatives that ultimately resulted in failing grades. Most were docked for their policies’ limitations and enforcement, while GLAAD explained TikTok was lacking “adequate transparency” in several areas.

“The company currently does not disclose options for users to control the company’s collection of information related to their sexual orientation and gender identity,” the report said, recommending it should give users control over their own data and diversify their workforce.

“Notably, TikTok was the only company that did not disclose any information on steps it takes to diversify its workforce.”

Ellis called the companies’ performances “unacceptable.” “At this point, after their years of empty apologies and hollow promises, we must also confront the knowledge that social media platforms and companies are prioritizing profit over LGBTQ safety and lives.”

The safety of social media is particularly important when considering the vulnerable states of young LGBTQ users. According to The Trevor Project, 45% of LGBTQ youth seriously considered committing suicide in the last year, while 73% reported experiencing symptoms of anxiety.

Billionaire Businessman Orlando Bravo Claims The Metaverse Will Be Big And Should Be Invested In

Puerto Rican billionaire businessman Orlando Bravo, co-founder and managing partner of the private equity firm Thoma Bravo, claimed this week that the metaverse will be “the big word of 2021” and is a big-time investment.

“The metaverse is very investable, and it’s going to be very big.” 


Much like the movie “Ready Player One,” the metaverse is a sci-fi concept where humans put on some sort of virtual reality gear that allows them to live, work, and play in a virtual world. The concept has been viewed as a utopian dream, and a dystopian nightmare, depending on your standpoint. 

Facebook’s co-founder Mark Zuckerberg announced his company’s plans for the metaverse last month. Zuckerberg also recently changed the name of Facebook to Meta, claiming the new company will have a major focus on the metaverse. 

“The metaverse is the next frontier just like social networking was when we got started.”


The entire concept of the metaverse has been heavily debated online. One marketing campaign in Iceland even went as far as to mock the metaverse announcement video as a means of bringing in tourists. In the video, a Zuckerberg lookalike introduces viewers to “Icelandverse, a place of enhanced actual reality without the silly-looking headsets.” 

Beyond Facebook, tech giants like Microsoft, Roblox, and Nvidia are already trying to enhance their software so that it’s compatible with the metaverse, and can even be used to power it if needed. 

Thoma Bravo alone has more than $83 billion in assets under management and a portfolio that contains more than 40 software companies. Beyond his excitement for the metaverse, Bravo also discussed his passion for cryptocurrency and bitcoin.

“How could you not love crypto? Crypto is just a great system. It’s frictionless. It’s decentralized. And young people want their own financial system. So it is here to stay,” Bravo said.

According To Pearson/NORC Poll, Most Americans Think Misinformation Is A Problem

According to the results of a poll released by the Pearson Institute and the Associated Press-NORC Center, 95% of Americans believe that misinformation regarding current events and issues is a problem, with 81% calling it a major problem.

Additionally, 91% say that social media companies are responsible for the spread of misinformation, and 93% say the same of social media users. More Americans blamed social media users, social media companies, and U.S. politicians for the spread of misinformation than blamed the U.S. government or foreign governments. However, older adults are more likely than younger adults to blame foreign countries.


41% are worried they have been exposed to misinformation, but just 20% are worried they have spread it themselves. The poll, which involved 1,071 adults, found that younger adults are more likely than older adults to worry about having spread misinformation.

Lastly, most Americans felt that social media companies and users, the U.S. government, and U.S. politicians all share responsibility for dealing with the spread of misinformation.

The results of this poll shouldn’t be too surprising, as the threat and spread of misinformation has grown exponentially with the rise of social media over the past decade.

In addition, major events such as elections, natural disasters, and the COVID-19 pandemic have been at the center of misinformation. Many people have had their opinions on the virus and vaccines affected by the fake news swirling around them, which shows that something as simple as a lie or exaggeration in an article can have massive, negative impacts.

Social media platforms have made attempts in the past to combat misinformation. Back in 2017, Facebook discussed some of the steps it was taking on this front, such as updating fake account detection, working with fact-checking organizations to identify fake news, and making it harder for parties guilty of spreading misinformation to buy ads. Facebook also assured users of easier reporting of fake news and improved news feed rankings.


Those improvements clearly haven’t done much, if anything at all. In 2020, Forbes reported on a study that found Facebook was the leading social media site for referrals to untrustworthy news, directing users to fake news over 15% of the time versus hard news just 6% of the time. It wasn’t a close margin between social media sites, either: Google came in at 3.3% untrustworthy versus 6.2% hard news, while Twitter had 1% untrustworthy versus 1.5% hard news.

Speaking to 60 Minutes, Facebook whistleblower Frances Haugen explained how the tech giant prioritized what content users would see on their news feeds, which helped lead to the spread of misinformation designed to provoke fierce reactions.

“And one of the consequences of how Facebook is picking out that content today is it is — optimizing for content that gets engagement, or reaction. But its own research is showing that content that is hateful, that is divisive, that is polarizing, it’s easier to inspire people to anger than it is to other emotions.”

If you are worried about taking the bait on misinformation or spreading it around, there are plenty of ways to train yourself to have a keener eye. According to The Verge, looking at factors such as survey and infographic sources, quotes, names and keywords, and the time-sensitivity of an article can all help you conclude whether or not misinformation may be afoot.

You should also take the time to consider other details, such as who is providing the information and how the story is being presented by different media sources. The Verge also urges readers to think about their own feelings: are you getting strong emotions from reading the article? Do you want to instantly share it? If an article feeds into reactions more than it emphasizes actual facts or information, that could be a red flag.

Facebook Postpones “Instagram For Kids”

Following sharp backlash from parents, users, and lawmakers, Facebook has announced that it is pausing its latest venture: “Instagram Kids,” a spin-off of the photo-sharing app that would target tweens ages 10 to 12.

In a statement published on their blog, Facebook explained that while the need to continue building their project remains, they will be working with those who were most vocal about Facebook’s planned platform:

“While we stand by the need to develop this experience, we’ve decided to pause this project. This will give us time to work with parents, experts, policymakers and regulators, to listen to their concerns, and to demonstrate the value and importance of this project for younger teens online today.”


The app had been in development since March and was set to be led by Instagram head Adam Mosseri and Facebook vice president Pavni Diwanji. Diwanji had previously been influential in Google’s launch of YouTube Kids back in 2015.

However, the titan of industry, which acquired Instagram in 2012, did not back down in the face of the vast amount of criticism or admit failure. Instead, it defended its attempt to target a group that some might argue is the most vulnerable to the dangers and pressures of the online world:

“Critics of “Instagram Kids” will see this as an acknowledgement that the project is a bad idea. That’s not the case. The reality is that kids are already online, and we believe that developing age-appropriate experiences designed specifically for them is far better for parents than where we are today.”

While the app may not be going forward at the moment, there is plenty of merit to creating a safe social platform space for younger audiences who, one way or another, will inevitably make their way online.

When you hear the words “middle school” and “social media,” cyberbullying is probably the first thing that comes to mind. Thanks to Instagram’s popularity among teens and its plethora of features, which include direct and group messaging, stories, tagging, posting, and multiple account creation, it has become a breeding ground for aggressive virtual assaults.

According to the Pew Research Center, 59% of teenagers have experienced at least one method of harassment online across all platforms of social media. These can include name-calling, negative rumors, and receiving unrequested explicit images.


Ditch the Label, a U.K.-based anti-bullying charity, conducted a survey in 2017 showing that of the 78% of young respondents who used Instagram, 42% had experienced some form of cyberbullying. That was the highest bullying rate among young users of any platform, beating Facebook by 6%.

The Pew Research Center also found that 66% of teens felt social media platforms were not doing a good enough job of addressing online harassment. Facebook has stated their plans to continue enhancing safety on Instagram, implementing changes such as AI detection technology, restrictions, hidden words and the ability to make accounts private.

Facebook has also started using cross-checking technology to confirm user ages. Up until a couple of years ago, Instagram had only required new users to input their birth date to confirm they were 13 or older, something that was unbelievably easy for young tweens to lie about.

Despite Facebook’s continued safety measures, a recent Wall Street Journal report has revealed that the company is aware of the potential dangers their apps hold to their younger target audience, specifically to teen girls. However, the company has downplayed these concerns publicly.

This new information has led politicians to cast doubt on Facebook and Instagram’s ability to properly adapt a system that prioritizes the safety of young users while their platforms retain the key features that allow cyberbullying to persist.

Facebook Whistleblower To Testify In Front Of Senate Regarding Company’s Impact On Kids

Frances Haugen is a former Facebook product manager, who was recently identified as the Facebook whistleblower who released tens of thousands of pages of research and documents that indicate the company was more than aware of the various negative impacts its platforms have, particularly on young girls. 

Haugen worked on civic integrity issues within the company. Now, Haugen will be questioned by a Senate Commerce subcommittee about what Instagram, which is owned by Facebook, knew regarding its effects on young users and a multitude of other issues. 


“I believe what I did was right and necessary for the common good — but I know Facebook has infinite resources, which it could use to destroy me. I came forward because I recognized a frightening truth: almost no one outside of Facebook knows what happens inside Facebook.”

Haugen previously shared a series of documents with regulators and with the Wall Street Journal, which published a multi-part investigation on Facebook showing the platform was aware of the problems within its apps, including the negative effects of misinformation and the harm Instagram causes young users.

“When we realized tobacco companies were hiding the harm it caused, the government took action. When we figured out cars were safer with seat belts, the government took action. And today, the government is taking action against companies that hid evidence on opioids. I implore you to do the same here. Facebook’s leadership won’t make the necessary changes because they have put their immense profits before people,” she explained. 

This is not the first time Facebook will be subject to Congressional hearings regarding its power and influence over its users. Haugen’s upcoming testimony will speak to the overall issue of social media platforms and the amount of power they have in regards to personal data and privacy practices. 


Haugen discussed how her goal isn’t to bring down Facebook, but to reform it from the toxic traits that continue to exist today. Around a month ago Haugen filed at least eight complaints to the Securities and Exchange Commission. The complaints alleged that the company is hiding research about its shortcomings from investors, and of course, the public. 

Democratic Senator Richard Blumenthal, who chairs the Senate Commerce subcommittee on consumer protection, released a statement this Sunday after Haugen’s appearance on “60 Minutes” where she identified herself as the whistleblower.

“From her [Haugen’s] first visit to my office, I have admired her backbone and bravery in revealing terrible truths about one of the world’s most powerful, implacable corporate giants. We now know about Facebook’s destructive harms to kids … because of documents Frances revealed.”

Following the Wall Street Journal’s investigative piece on Facebook, Antigone Davis, the company’s global head of safety, was questioned by members of the same Senate subcommittee, specifically in regards to Facebook’s impact on young users. Davis tried to downplay the idea that these reports are being seen as a “bombshell” by the public, and didn’t commit to releasing a fully detailed research report, to defend Facebook’s side of the argument, due to “privacy considerations.”

“Facebook’s actions make clear that we cannot trust it to police itself. We must consider stronger oversight, effective protections for children, and tools for parents, among the needed reforms,” Senator Blumenthal added.