
EU Launching Formal Investigation Into Meta Regarding Election Misinformation Before June Polls Open 

The European Union (EU) is set to launch a formal investigation into Meta, the parent company of Facebook and Instagram. The investigation was prompted by the EU’s concerns that the tech giant isn’t doing enough to counter Russian disinformation ahead of the EU elections in June, according to reports.

The EU is also expected to raise concerns about the lack of effective monitoring of election content and the inadequate tools available for flagging illegal content.


Lisa O’Carroll, a correspondent for The Guardian, wrote that the European Commission is worried about Meta’s moderation system, reporting that it is not extensive enough to combat the spread of misinformation and attempts to suppress voting.

The Financial Times revealed that government officials are worried about how Meta is handling Russia’s specific efforts to undermine the upcoming elections. 

Meta’s plan to discontinue its CrowdTangle tool also has officials concerned. CrowdTangle is a public insights tool that allows researchers, journalists, and others within the EU to monitor in real time the spread of misinformation and any attempts to suppress voting. 

The EU now has laws in place that require tech companies to moderate the content on their platforms and to maintain systems that guard against systemic risks such as election interference.


“We have a well-established process for identifying and mitigating risks on our platforms. We look forward to continuing our cooperation with the European Commission and providing them with further details of this work,” a spokesperson for Meta stated.

The commission recently carried out “stress tests” on all the major social media platforms to determine whether proper safeguards were in place to prevent the spread of misinformation. The tests involved a series of fictional scenarios based on past attempts at election manipulation, such as the use of deepfakes and speech suppression.

“The aim was to test platforms’ readiness to address manipulative behavior that could occur in the run-up to the elections, in particular the different manipulative tactics, techniques and procedures,” the commission stated.

This past Monday, the European Parliament released official tips for voters in the upcoming elections, which will take place between June 6th and 9th. It cited practical voting details, such as the specific pen colors needed for a ballot to be valid, and warned citizens to be diligent about spotting disinformation.


Facebook And Instagram To Start Labeling Digitally Altered Content ‘Made With AI,’ Meta Says

Meta, the owner of Facebook and Instagram, announced that it will be making major changes to its policies on digitally created and altered media.

Meta will start adding “Made with AI” labels to posts that use artificial intelligence to create photos, videos, and audio published on Facebook and Instagram. The apps will begin adding this label in May. 

Monika Bickert, Meta’s vice president of content policy, stated in a blog post that Meta would “apply separate and more prominent labels to digitally altered media that poses a particularly high risk of materially deceiving the public on a matter of importance, regardless of whether the content was created using AI or other tools,” according to the Guardian.

A spokesperson also stated that Meta will begin applying more prominent high-risk labels immediately. 


The approach marks an overall shift in how Meta treats manipulated content. Instead of removing such posts altogether, Facebook and Instagram will leave them up with labels that give viewers information about how the image was created or edited.

A company spokesperson said the “labeling approach would apply to content posted on Facebook, Instagram and Threads. Its other services, including WhatsApp and Quest virtual-reality headsets, are covered by different rules.”

In February, Meta’s oversight board said the company’s existing rules on manipulated media were “incoherent” after reviewing a video of President Joe Biden posted on Facebook last year that had been digitally altered to make it seem as though the president was acting inappropriately.

The board said the “policy should also apply to non-AI content, which is not necessarily any less misleading than content generated by AI, as well as to audio-only content and videos depicting people doing things they never actually said or did,” according to the Guardian.


America’s Still Moving To Ban TikTok 

Last week, a key House committee introduced and approved a bill targeting the social media platform TikTok. The full House could vote as soon as this week, and the White House has stated that President Joe Biden is prepared to sign it, according to reports from CNN.

The bill itself, if fully approved, would give TikTok about five months to separate from its Chinese parent company, ByteDance. If ByteDance refuses, app stores in the US would be prohibited from hosting the app on their platforms.

Beyond TikTok, the bill would also restrict other apps allegedly controlled by foreign adversaries such as China, Iran, Russia, or North Korea, and it would set up a process for Biden and future presidents to identify apps that should be banned under the legislation.


Any app store that violates the legislation could be fined based on the number of users of the banned app, specifically $5,000 per user. For example, if the bill passes and Apple or Google decides to keep TikTok in its app store, it could face fines of up to $850 billion.
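To put that figure in context, here is a minimal back-of-the-envelope sketch of the arithmetic, using the 170 million US-user figure TikTok itself cites later in this piece; it is illustrative only, not a formula taken from the bill’s text.

```typescript
// Back-of-the-envelope check of the maximum penalty cited above.
// Assumes the $5,000-per-user fine applied across TikTok's reported
// 170 million US users; purely illustrative.
const finePerUserUsd = 5_000;
const reportedUsUsers = 170_000_000;

const maxFineUsd = finePerUserUsd * reportedUsUsers;
console.log(`Maximum exposure: $${maxFineUsd.toLocaleString("en-US")}`);
// -> Maximum exposure: $850,000,000,000 (i.e. $850 billion)
```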

One of the bill’s lead cosponsors, Wisconsin Republican Rep. Mike Gallagher, says “the bill does not ban TikTok; it simply offers TikTok the choice to be divested.”

TikTok has responded to the bill’s momentum by stating that it is an attack on the First Amendment rights of its users, according to CNN. It has even launched a call-to-action campaign within the app itself, urging users to call their representatives in Washington to oppose the bill. Multiple congressional offices have already stated that they’ve been “flooded” with calls.

In a statement, TikTok said: 

“The government is attempting to strip 170 million Americans of their Constitutional right to free expression. This will damage millions of businesses, deny artists an audience, and destroy the livelihoods of countless creators across the country.”

Lawmakers have long alleged that TikTok poses a national security threat because China’s government could use its intelligence laws to force ByteDance to hand over the data of US TikTok users. That information could then be used to identify intelligence targets or to enable disinformation and propaganda campaigns.


The US government has not yet presented any evidence that China has accessed user data from TikTok, and according to reports, cybersecurity experts have stated that it still remains a hypothetical scenario. 

During the Trump administration, there was a major effort to ban TikTok; however, it was debated whether the president had the power to ban a foreign-owned social media app. Under the new congressional legislation, the president would have clear new authority to do so.

Given the speed with which House leaders are promising a floor vote, they appear confident the bill will clear the chamber. It is still unclear whether the bill will have a chance in the Senate.

Gallagher stated that the bill will likely fall to the Senate Commerce Committee. Senator Maria Cantwell, who chairs the Commerce Committee, told CNN that she will be talking to her “Senate and House colleagues to try to find a path forward that is constitutional and protects civil liberties.”

Jameel Jaffer, executive director of the Knight First Amendment Institute at Columbia University, said that “passing a nationwide privacy law regulating how all companies, not just TikTok, handle Americans’ data would lead to the same result without raising First Amendment concerns.” 

“By that precedent, it would be unconstitutional for the government to ban TikTok even if it were blatantly a direct mouthpiece for the Chinese government,” Jaffer said.

“If you give the government the power to restrict Americans’ access to propaganda, then you’ve given the government the power to restrict Americans’ access to anything the government deems to be propaganda.”


Tesla Recalls Over 2 Million Vehicles In The US Due To Warning Light Issues 

Tesla is recalling over 2 million vehicles in the US, nearly all of the vehicles it has sold there, because some warning lights on the instrument panel are too small. The recall was announced last Friday by the National Highway Traffic Safety Administration (NHTSA), in what many are calling a “sign of stepped-up scrutiny of the electric vehicle maker,” according to the Associated Press.

The administration also stated that it is upgrading its 2023 investigation into Tesla steering issues to an engineering analysis, which could lead to yet another recall.

The documents posted on Friday by the NHTSA state that the warning light recall will be handled through an online software update. The models included in the recall are the 2012 through 2023 Model S, the 2016 through 2023 Model X, the 2017 through 2023 Model 3, the 2019 through 2024 Model Y, and the 2024 Cybertruck.

The agency says that the “brake, park and antilock brake warning lights have a smaller font size than required by federal safety standards. That can make critical safety information hard to read, increasing the risk of a crash.”


NHTSA stated that it found the issue during a routine safety compliance audit performed on January 8th. Back in December, the agency pressured Tesla to recall more than 2 million vehicles to update their software and fix a defective system meant to ensure drivers are paying attention to the road when using Autopilot.

That recall came after a two-year NHTSA investigation into a series of crashes that occurred while the Autopilot feature was engaged, some of them fatal.

According to the agency, Autopilot’s system of controls can allegedly be “inadequate and can lead to foreseeable misuse of the system. The added controls and alerts further encourage the driver to adhere to their continuous driving responsibility.”

In February 2023, NHTSA also pushed Tesla to recall over 360,000 vehicles equipped with its “Full Self-Driving” system because the system can behave unpredictably around intersections and doesn’t always follow speed limits. That recall was also part of a larger investigation into Tesla and its automated driving systems in general.

Tesla is also recalling over 1.6 million Model S, X, 3, and Y vehicles exported to China over problems with their automatic assisted steering and door latch controls.

Agency documents state that Tesla drivers are reporting a loss of steering control, often accompanied by messages indicating that power-assisted steering has been reduced or disabled completely.


UK Artists Potentially Joining Lawsuit Against Midjourney Over Use Of Their Work To Train AI Software 

Midjourney is one of many image generators available to the public online that use artificial intelligence to produce an image, or series of images, based on what the user enters into a text prompt.

AI technology has been on the rise over the past few years and has now entered the mainstream. The ways in which AI systems gather the material they are trained on, however, have been called out as unethical, or outright theft, by writers, artists, and other creatives whose works are being used to train these systems without their knowledge.

With Midjourney specifically, a list recently surfaced of around 16,000 artists whose work has been used to train the company’s AI. Some of these artists include Bridget Riley, Damien Hirst, Rachel Whiteread, Tracey Emin, David Hockney, and Anish Kapoor, according to the Guardian.


UK artists have now contacted lawyers in the US to discuss joining a class action lawsuit against Midjourney and other AI companies engaged in similar practices.

Tim Flach, the president of the Association of Photographers and a photographer himself who was included on the list of 16,000, stressed the importance of artists coming together to challenge AI programs and companies.

“What we need to do is come together. This public showing of this list of names is a great catalyst for artists to come together and challenge it. I personally would be up for doing that.”

The list of names was released in a 24-page document used in the class action lawsuit filed by 10 American artists in California. The lawsuit is against Midjourney, Stability AI, Runway AI, and DeviantArt.

Matthew Butterick, one of the lawyers representing the artists, stated that since filing, they’ve received interest from artists all over the world who want to join the suit. The tech firms involved have until February 8th to respond to the claim, which states the following:

“Though [the] defendants like to describe their AI image products in lofty terms, the reality is grubbier and nastier: AI image products are primarily valued as copyright-laundering devices, promising customers the benefits of art without the costs of artists.”


The lawsuit also stated that with Midjourney specifically, users are allowed, and even encouraged, to specify an artist’s personal style when entering the description for the image they want to generate.

“The impersonation of artists and their style is probably the thing that will stick, because if you take an artist’s style you’re effectively robbing them of their livelihood,”  Flach said. 

The Design and Artists Copyright Society (DACS) conducted a survey last week of 1,000 artists and agents regarding the lack of legal regulation of generative AI technologies. The survey found that 89% of respondents want the UK government to regulate generative AI, while 22% had discovered that their own works had been used to train AI.

“If we’d done our survey now [after the list had come out] we probably would have had a stronger response. A lot of people didn’t know whether their works had been used. There’s a transparency we didn’t have a couple of months ago,” said Reema Selhi, head of policy at DACS.

Selhi went on to discuss how ministers initially wanted to open up copyright laws to make it easier for companies to train AI systems on artists’ works without needing their permission.

“We’ve had such a great strength of feeling from people that this is completely copyright infringement. Permission hasn’t been sought. They haven’t given consent. They haven’t been remunerated. They haven’t been credited.”

DACS is actively pushing for some form of official licensing or royalty system to be put in place so that artists have more control over where and how their works are used, or at the very least receive some sort of compensation.


Indiana State’s Lawsuit Against TikTok Over Child Safety Dismissed By Judge

A judge has dismissed a lawsuit filed by the state of Indiana against TikTok that accused the platform of making false claims about children’s safety and age-appropriate content on the app.

According to CNN, Judge Jennifer DeGroote of Allen County Superior Court in Fort Wayne, Indiana stated that the court lacks “personal jurisdiction” over the social media platform, and that downloading a free app is not considered a “consumer transaction” under the Indiana Deceptive Consumer Sales Act.


The lawsuit was initially filed in December 2022, and was originally two separate lawsuits that were later consolidated. This was the first lawsuit filed by a state against TikTok, however, similar lawsuits are currently active in other states. 

“[The state respects the ruling] but we also disagree with it on various points and are considering appellate options at this time,” the office of Indiana Attorney General Todd Rokita said in a statement to CNN.

“We were the first state to file suit against TikTok, but not the last, and it’s reassuring to see others take up this ongoing fight against a foreign Big Tech threat, in any jurisdiction.”

Rokita also stated that TikTok is a “malicious and menacing threat unleashed on unsuspecting Indiana consumers by a Chinese company that knows full well the harms it inflicts on users.”


The lawsuit alleged that the social media platform markets itself to younger users as a safe app when, in reality, it easily grants users access to inappropriate content such as nudity, profanity, and depictions of drug and alcohol use.

The lawsuit also stated that TikTok collects sensitive data from its users and uses their personal information. “[TikTok] has deceived those consumers to believe that this information is protected from the Chinese government and Communist Party.”

Indiana has also been involved in a lawsuit against Meta, the parent company of Instagram, over the platform’s addictive nature and its harm to young users’ mental health. Dozens of other states have filed similar lawsuits against Meta as well.

Indiana was also one of the first states to ban TikTok on any government-issued devices over “the threat of gaining access to critical US information and infrastructure.”


Google To Potentially Invest Hundreds Of Millions Into Character.AI Startup 

Google is currently in talks to invest in Character.AI, an artificial intelligence chatbot platform startup. According to CTech News, Character.AI was created by Noam Shazeer and Daniel De Freitas, two former employees of Google Brain.

Google is prepared to invest “hundreds of millions of dollars” into Character.AI as it continues to train chatbot models to talk to users, according to sources who spoke to Reuters. 


Character.AI and Google already have a standing relationship: the startup uses Google’s cloud services and Tensor Processing Units to train its chatbot models, so the investment would deepen that partnership.

Character.AI allows users to log in and choose from a variety of celebrities, movie characters, creatures, and more to chat with. Users can even create their own character chatbot to speak with. A subscription costs $9.99 a month, but the platform is also free to use.

According to data from Similarweb, reported by CTech, “Character.AI’s chatbots, with various roles and tones to choose from, have appealed to users ages 18 to 24, who contributed about 60% of its website traffic. The demographic is helping the company position itself as the purveyor of more fun personal AI companions, compared to other AI chatbots from OpenAI’s ChatGPT and Google’s Bard.”

Within the first six months of launching, Character.AI saw about 100 million visits every month. 

Reuters wrote that “The startup is also in talks to raise equity funding from venture capital investors, which could value the company at over $5 billion.

In March, it raised $150 million in a funding round led by Andreessen Horowitz at $1 billion valuation.

Google has been investing in AI startups, including $2 billion for model maker Anthropic in the form of convertible notes, on top of its earlier equity investment.”


Google To Replace Passwords With Passkeys In New Update 

On Tuesday, Google announced an update to how it plans to approach account security: replacing passwords with passkeys. According to Gizmodo, Google claims it’s planning to “make passwords a rarity, and eventually obsolete.”

Passkeys have been around for a little while now. A passkey lets you sign in to an account using a short-form method of unlocking your device, for example your fingerprint or the PIN code you use to unlock your phone, instead of a typed password.


The biggest advantage of this type of system is that hackers would need your entire device, not just your password, to gain access to your accounts, since most passkeys only work on the single device they were set up on.

According to Google, passkeys are also 40% faster to use than passwords, and they’re popular among consumers because they remove the need to memorize long, randomized passwords full of various letters, numbers, and symbols.
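For readers curious about what happens under the hood, passkeys are built on the WebAuthn standard, and the sketch below shows roughly how a website asks the browser to create one. The site name, user details, and client-side challenge are illustrative placeholders (in a real flow the challenge and user handle come from the server), and this is a minimal example, not Google’s actual implementation.

```typescript
// Minimal sketch of creating a passkey with the browser's WebAuthn API.
// The relying party, user details, and challenge are placeholders; a real
// site would fetch the challenge and user handle from its server.
async function createPasskey(): Promise<Credential | null> {
  return navigator.credentials.create({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
      rp: { name: "example.com" }, // the site the passkey belongs to
      user: {
        id: crypto.getRandomValues(new Uint8Array(16)), // opaque user handle
        name: "user@example.com",
        displayName: "Example User",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: {
        residentKey: "required",      // store the credential on the device
        userVerification: "required", // unlock with fingerprint, face, or PIN
      },
    },
  });
}
```

The key detail is the last block: requiring a resident key and user verification is what turns an ordinary credential into a passkey that lives on the device and is unlocked with a fingerprint or PIN.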

According to Gizmodo, technology experts have spent the past decade predicting that passwords would become obsolete; every year, however, that prediction has been proven wrong, especially with the rise of new streaming services and social media platforms over the past five years.


Google, however, is the platform that has the best chance at being the catalyst for this transition. As one of the biggest tech companies in the world, Google has the potential to set a new precedent when it comes to how the average person protects their presence online. 

Google has stated that when users log into their accounts, a new prompt will appear asking whether they want to create and use passkeys. Users can also turn on the “skip passwords when possible” option in their Google account settings, along with other related features.

With these new changes, it’s still not likely that passwords will go away anytime soon. 

“The tech industry has a lot of work to tackle before you can forget all your passwords, but that impossible dream is now a massive step closer to reality,” wrote Gizmodo’s Thomas Germain.


Apple Addresses Reports of iPhone 15 Overheating

Apple acknowledged user reports of overheating on some of its new iPhone 15 models, stating that a software glitch and certain apps are to blame.

The latest iPhone hit the shelves on September 22. Shortly after, users voiced their concerns about the device becoming uncomfortable to hold after limited use, sometimes reaching temperatures of more than 110 degrees Fahrenheit (43 degrees Celsius).

In a statement to CNN, Apple shared that it has “identified a few conditions which can cause an iPhone to run warmer than expected.”

Overheating can occur with the use of some recently updated third-party apps. The device may also run hot due to “increased background activity” the first few days after setup or a system reset.

According to Apple, the overheating problems currently affecting some iPhone models do not pose a safety risk and will not have any effect on the devices’ long-term performance. Apple also highlighted the fact that iPhones have thermal protections built in to help with overheating.


The company is working with the developers of third-party apps whose recent updates were running in ways “causing them to overload the system.” Instagram, ridesharing service Uber, and racing game Asphalt 9 are just a few examples.

“We’re working with these app developers on fixes that are in the process of rolling out.”

Last week, Meta updated its Instagram app for the newest version of iOS to fix an issue where the app would cause the iPhone to overheat.

In addition, Apple acknowledged a bug in iOS 17 that could be affecting some users. It plans to release a software update to fix the problem, but no date has been set for the update’s release.

On Apple’s support page, users are cautioned that their devices may overheat when performing tasks such as restoring from a backup, using apps with heavy graphics processing, streaming high-quality video, and charging wirelessly.


“These conditions are normal, and your device will return to a regular temperature when the process is complete or when you finish your activity. If your device doesn’t display a temperature warning, you can keep using your device.”

Despite the recent complaints, consumer demand for the iPhone 15 appears strong. Analysts reported that iPhone 15 pre-orders were doing better than initially expected in the days leading up to the release date, with robust demand also for the premium iPhone 15 Pro and the iPhone 15 Pro Max.

In August, shortly before the release of the new iPhones, Apple reported that sales had dropped for a third consecutive quarter, with iPhone revenue totaling around $39.7 billion. The figure represents a decrease of about 2% from the previous year as fewer people upgrade their devices.

About 250 million iPhones, according to estimates, have not been updated in more than four years. A shift to a USB-C charging port and improvements to the processor and camera may incentivize users to upgrade this year.

The iPhone 15 Pro starts at $999, while the iPhone 15 Pro Max begins at $1,199. The base-level iPhone 15 goes for $799, while the iPhone 15 Plus costs $899.


France To Allow Apple To Sell iPhone 12s Again After Meeting Radiation Standards

France may now allow Apple to sell the iPhone 12 again after weeks of the nation citing the device’s allegedly high levels of electromagnetic radiation. Apple has apparently brought the phone within France’s electromagnetic radiation standards, allowing the company to start selling it again.


France’s L’Agence Nationale des Fréquences, or ANFR, confirmed this week that the iPhone 12 met the standards after Apple issued a new software patch for the phone.

According to Gizmodo, the agency stated it tested the phone’s specific absorption rate (SAR), which measures how much radio-wave energy a body absorbs from the device, and found that the iPhone 12 was back in compliance with the nation’s standards after the patch.

Once Apple patches all iPhone 12s, it will be able to sell the devices in France again.

It was about two weeks ago that France initially put a stop to the sales of iPhone 12s throughout the country after the ANFR claimed the device didn’t meet radiation standards. Other nations such as Belgium and Germany followed suit and began testing the devices themselves as well. 


Apple has consistently claimed that the issue isn’t actually with its phones but with the testing standards used by France and the EU more broadly. The company stated that the new software patch simply “accommodates” those testing protocols.

According to Gizmodo, “The ANFR said the tests were performed by an accredited laboratory and found the SAR was 3.94 W/kg (watt per kilogram) after the update compared to 5.74 W/kg before the update. The measurements were taken when phones are kept close to the body, such as inside a user’s pocket.”

“France and other EU countries have placed limits on the amount of electromagnetic radiation that can be absorbed by a body. According to the French regulatory agency, the country monitors all waves between 100 kHz and 10 GHz,” the publication explained
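As a rough illustration of the compliance comparison described above, the sketch below checks the two reported SAR readings against the EU’s 4.0 W/kg limit for limb exposure; that threshold is an assumption drawn from general EU exposure rules, not a figure quoted by the ANFR in this piece.

```typescript
// Rough illustration of the compliance comparison described above.
// The 4.0 W/kg limb-exposure limit is assumed from general EU rules;
// the readings are the before/after values reported by the ANFR.
const limbSarLimitWPerKg = 4.0;

const readings = [
  { label: "before the software patch", sarWPerKg: 5.74 },
  { label: "after the software patch", sarWPerKg: 3.94 },
];

for (const { label, sarWPerKg } of readings) {
  const status = sarWPerKg <= limbSarLimitWPerKg ? "within" : "exceeds";
  console.log(`${label}: ${sarWPerKg} W/kg ${status} the ${limbSarLimitWPerKg} W/kg limit`);
}
```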

According to the World Health Organization, there isn’t any definitive science that proves there are major health consequences to exposure from low-level electromagnetic fields and waves. However, the recent wave of 5G conspiracy theories spreading online hasn’t helped when it comes to people worrying about radiation in their personal devices.