EU Launching Formal Investigation Into Meta Regarding Election Misinformation Before June Polls Open 

The European Union (EU) is set to launch a formal investigation into Meta, the parent company of Facebook and Instagram. The investigation was prompted by the EU’s concerns that the tech giant isn’t doing enough to counter Russian disinformation ahead of the EU elections in June, according to reports.

The EU is also likely to express concerns about Meta’s lack of effective monitoring of election content and the inadequate tools it provides for flagging illegal content. 


Lisa O’Carroll, a correspondent for The Guardian, wrote that the European Commission is worried that Meta’s moderation system is not extensive enough to combat the presence of misinformation or to counter attempts to suppress voting. 

The Financial Times revealed that government officials are worried about how Meta is handling Russia’s specific efforts to undermine the upcoming elections. 

Meta’s plan to discontinue its CrowdTangle tool also has officials concerned. CrowdTangle is a public insights tool that allows researchers, journalists, and others within the EU to monitor in real time the spread of misinformation and any attempts to suppress voting. 

The EU now has laws in place requiring tech companies to regulate their content and to maintain systems that guard against systemic risks such as election interference. 


“We have a well-established process for identifying and mitigating risks on our platforms. We look forward to continuing our cooperation with the European Commission and providing them with further details of this work,” a spokesperson for Meta stated.

The commission recently carried out “stress tests” on all the major social media platforms to determine whether proper safeguards were in place to prevent the spread of misinformation. The tests involved a series of made-up scenarios based on past attempts at election manipulation, such as the use of deepfakes and speech suppression.

“The aim was to test platforms’ readiness to address manipulative behavior that could occur in the run-up to the elections, in particular the different manipulative tactics, techniques and procedures,” the commission stated.

This past Monday, the European Parliament released official tips for voters in the upcoming elections, which will take place between June 6th and 9th. It cited previous voting issues, such as the specific pen colors needed for a ballot to be valid, and warned citizens to be diligent about spotting disinformation. 


Underwater Drones To Be Used To Monitor Data In Earth’s Waters Amid Climate Change 

The company Aquaai has used its underwater drone technology to monitor water quality, fish health, and more in freshwater and saltwater resources throughout California and Norway. Now, it hopes to use similar technology to gather water data in the Middle East amid the ongoing impacts of climate change on the planet’s water supply.


Facebook And Instagram To Start Labeling Digitally Altered Content ‘Made With AI,’ Meta Says

Meta, the owner of Facebook and Instagram, announced major changes to its policies on digitally created and altered media. 

Meta will start adding “Made with AI” labels to posts that use artificial intelligence to create photos, videos, and audio published on Facebook and Instagram. The apps will begin adding this label in May. 

Monika Bickert, vice president of content policy, stated in a blog post that Meta would “apply separate and more prominent labels to digitally altered media that poses a particularly high risk of materially deceiving the public on a matter of importance, regardless of whether the content was created using AI or other tools,” according to the Guardian.

A spokesperson also stated that Meta will begin applying more prominent high-risk labels immediately. 


This approach marks a broad shift in the way Meta treats manipulated content. Instead of removing such content altogether, posts on Facebook and Instagram will remain up, with labels providing viewers with information about how the image was created or edited.

A company spokesperson said the “labeling approach would apply to content posted on Facebook, Instagram and Threads. Its other services, including WhatsApp and Quest virtual-reality headsets, are covered by different rules.”

In February, Meta’s oversight board said the company’s existing rules on manipulated media were “incoherent” after reviewing a video of President Joe Biden posted on Facebook last year that had been digitally altered to make it seem as though the president was acting inappropriately.

The board said the “policy should also apply to non-AI content, which is not necessarily any less misleading than content generated by AI, as well as to audio-only content and videos depicting people doing things they never actually said or did,” according to the Guardian.


Scientists In Belgium Are Using AI To Make Their Beer Taste Better 

Researchers in Belgium are currently exploring how AI can be used to improve the taste of their beer, which is known for its high quality and long history.


America’s Still Moving To Ban TikTok 

Last week, a key House committee introduced and approved a bill targeting the social media platform TikTok. The full House could vote as soon as this week, and the White House has stated that President Joe Biden is prepared to sign it, according to reports from CNN.

The bill, if fully approved, would give TikTok about five months to separate from its Chinese parent company, ByteDance. If it refuses, app stores in the US would be prohibited from hosting the app on their platforms. 

Besides TikTok, the bill would also restrict other apps allegedly controlled by foreign adversaries such as China, Iran, Russia, or North Korea. It would also set up a process for Biden, and future presidents, to identify apps that should be banned under the legislation. 


Any app store that violates the legislation could be fined based on the number of users of the banned app, at a rate of $5,000 per user. For example, if the bill passes and Apple or Google decides to keep TikTok in its app store, the company could face fines of up to $850 billion. 
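That $850 billion figure follows directly from the per-user rate and TikTok’s reported US user base. A quick back-of-the-envelope check (the $5,000 rate and 170 million users come from the reporting here; the variable names are just illustrative):

```python
# Per-user fine and US user count as reported in coverage of the bill
FINE_PER_USER = 5_000           # dollars per user of a banned app
TIKTOK_US_USERS = 170_000_000   # TikTok's stated US user base

max_fine = FINE_PER_USER * TIKTOK_US_USERS
print(f"${max_fine:,}")  # prints $850,000,000,000 — i.e., up to $850 billion
```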

One of the bill’s lead cosponsors, Wisconsin Republican Rep. Mike Gallagher, says “the bill does not ban TikTok; it simply offers TikTok the choice to be divested.”

TikTok has responded to the bill’s momentum by calling it an attack on the First Amendment rights of its users, according to CNN. It has even launched a call-to-action campaign within the app itself, urging users to call their representatives in Washington to oppose the bill. Multiple congressional offices have already stated that they’ve been “flooded” with calls. 

In a statement, TikTok said: 

“The government is attempting to strip 170 million Americans of their Constitutional right to free expression. This will damage millions of businesses, deny artists an audience, and destroy the livelihoods of countless creators across the country.”

Lawmakers have long alleged that TikTok poses a national security threat because China’s government could use its intelligence laws to force ByteDance to hand over the data of US TikTok users. That information could then potentially be used to identify intelligence targets or to enable disinformation or propaganda campaigns. 


The US government has not yet presented any evidence that China has accessed user data from TikTok, and according to reports, cybersecurity experts have stated that this remains a hypothetical scenario. 

During the Trump administration, there was a major effort to ban TikTok; however, it was debated whether the president had the power to ban a foreign-owned social media app. The new congressional legislation would give the president clear authority to do so. 

Given the speed with which House leaders are promising a floor vote, they appear confident the bill will pass the chamber. It remains unclear whether the bill stands a chance in the Senate. 

Gallagher stated that the bill will likely fall to the Senate Commerce Committee. Senator Maria Cantwell, who chairs the Commerce Committee, told CNN that she will be talking to her “Senate and House colleagues to try to find a path forward that is constitutional and protects civil liberties.”

Jameel Jaffer, executive director of the Knight First Amendment Institute at Columbia University, said that “passing a nationwide privacy law regulating how all companies, not just TikTok, handle Americans’ data would lead to the same result without raising First Amendment concerns.” 

“By that precedent, it would be unconstitutional for the government to ban TikTok even if it were blatantly a direct mouthpiece for the Chinese government,” Jaffer said.

“If you give the government the power to restrict Americans’ access to propaganda, then you’ve given the government the power to restrict Americans’ access to anything the government deems to be propaganda.”


US Company, Intuitive Machines, Wants To Bring America Back To The Moon 

Intuitive Machines is a US company that is trying to get back to the moon in what would be the first American lunar landing in more than 50 years.


Tesla Recalls Over 2 Million Vehicles In The US Due To Warning Light Issues 

Tesla is recalling over 2 million vehicles in the US, nearly all of the vehicles it has sold there, because some warning lights on the instrument panel are too small. The recall was announced last Friday by the National Highway Traffic Safety Administration (NHTSA), in what many are calling a “sign of stepped-up scrutiny of the electric vehicle maker,” according to the Associated Press. 

The administration also stated that it is upgrading its 2023 investigation into Tesla steering issues to an engineering analysis, which could lead to yet another recall. 

The documents posted on Friday by the NHTSA state the warning light recall specifically will be done with an online software update. The models included in this recall are the 2012 through 2023 Model S, the 2016 through 2023 Model X, the 2017 through 2023 Model 3, the 2019 through 2024 Model Y and the 2024 Cybertruck.

The agency says that the “brake, park and antilock brake warning lights have a smaller font size than required by federal safety standards. That can make critical safety information hard to read, increasing the risk of a crash.”


NHTSA stated that it found the issue during a routine safety compliance audit performed on January 8th. Back in December, the NHTSA pressured Tesla to recall more than 2 million vehicles to update their software and fix a defective system meant to ensure drivers are paying attention to the road when using Autopilot. 

That recall came after a two-year NHTSA investigation into a series of crashes that occurred while the Autopilot feature was engaged, some of them fatal. 

According to the agency, Autopilot’s driver-monitoring controls can be “inadequate and can lead to foreseeable misuse of the system. The added controls and alerts further encourage the driver to adhere to their continuous driving responsibility.” 

In February 2023, NHTSA also pushed Tesla to recall over 360,000 vehicles equipped with its “Full Self-Driving” system because the system can behave unpredictably around intersections and doesn’t always follow speed limits. That recall was also part of a larger investigation into Tesla and its automated driving systems in general. 

Tesla is also recalling over 1.6 million Model S, X, 3, and Y vehicles exported to China over problems with their automatic assisted steering and door latch controls. 

Agency documents state that Tesla drivers have reported losing control of their steering, often accompanied by messages indicating that power-assisted steering has been reduced or disabled completely.


UK Artists Potentially Joining Lawsuit Against Midjourney Over Use Of Their Work To Train AI Software 

Midjourney is one of many image generators available to the public online that use artificial intelligence to produce an image, or series of images, based on what the user enters into a text prompt. 

AI technology has been on the rise over the past few years and has now entered the mainstream. The ways in which AI systems gather training data, however, have been criticized as unethical, or outright theft, by writers, artists, and other creatives whose works are unknowingly being used to train these systems. 

With Midjourney specifically, a list recently surfaced of around 16,000 artists whose work has allegedly been used to train the company’s AI. These artists include Bridget Riley, Damien Hirst, Rachel Whiteread, Tracey Emin, David Hockney, and Anish Kapoor, according to the Guardian.


UK artists have now contacted lawyers in the US to discuss joining a class action lawsuit against Midjourney and other AI companies engaged in similar practices. 

Tim Flach, president of the Association of Photographers and a photographer himself who was included on the list of 16,000, stressed the importance of collaboration when it comes to challenging AI programs and companies. 

“What we need to do is come together. This public showing of this list of names is a great catalyst for artists to come together and challenge it. I personally would be up for doing that.”

The list of names was released in a 24-page document used in the class action lawsuit filed by 10 American artists in California. The lawsuit targets Midjourney, Stability AI, Runway AI, and DeviantArt. 

Matthew Butterick, one of the lawyers representing the artists, stated that since filing, they’ve received interest from artists all over the world who want to join the suit. The tech firms involved have until February 8th to respond to the claim, which states the following: 

“Though [the] defendants like to describe their AI image products in lofty terms, the reality is grubbier and nastier: AI image products are primarily valued as copyright-laundering devices, promising customers the benefits of art without the costs of artists.”


The lawsuit also stated that Midjourney specifically allows, and encourages, users to specify any artist’s personal style when entering the description for the image they want to generate. 

“The impersonation of artists and their style is probably the thing that will stick, because if you take an artist’s style you’re effectively robbing them of their livelihood,” Flach said. 

The Design and Artists Copyright Society (DACS) last week surveyed 1,000 artists and agents about the lack of legal regulation of generative AI technologies. The survey showed that 89% of respondents want the UK government to regulate generative AI, while 22% had discovered that their own works had been used to train AI. 

“If we’d done our survey now [after the list had come out] we probably would have had a stronger response. A lot of people didn’t know whether their works had been used. There’s a transparency we didn’t have a couple of months ago,” said Reema Selhi, head of policy at DACS.

Selhi went on to discuss how ministers had initially wanted to open up copyright law to make it easier for companies to train AI systems on artists’ works without needing their permission. 

“We’ve had such a great strength of feeling from people that this is completely copyright infringement. Permission hasn’t been sought. They haven’t given consent. They haven’t been remunerated. They haven’t been credited.”

DACS is actively pushing for an official licensing or royalty system so that artists have more control over where and how their works are used, or at the very least receive some form of compensation.


The New York Times Is Suing Microsoft And OpenAI Over Copyright Infringement 

OpenAI and Microsoft are being sued by the New York Times over copyright infringement, alleging that the two companies’ artificial intelligence technology has illegally copied millions of articles from the Times to train AI services like ChatGPT.


Indiana State’s Lawsuit Against TikTok Over Child Safety Dismissed By Judge

A judge has dismissed a lawsuit filed by the state of Indiana against TikTok over accusations that the company made false claims about child safety and age-appropriate content on the app. 

According to CNN, Judge Jennifer DeGroote of Allen County Superior Court in Fort Wayne, Indiana stated that the court lacks “personal jurisdiction” over the social media platform, and that downloading a free app is not considered a “consumer transaction” under the Indiana Deceptive Consumer Sales Act. 


The lawsuit was initially filed in December 2022 and was originally two separate lawsuits that were later consolidated. It was the first lawsuit filed by a state against TikTok; similar lawsuits are currently active in other states. 

“[The state respects the ruling] but we also disagree with it on various points and are considering appellate options at this time,” the office of Indiana Attorney General Todd Rokita said in a statement to CNN.

“We were the first state to file suit against TikTok, but not the last, and it’s reassuring to see others take up this ongoing fight against a foreign Big Tech threat, in any jurisdiction.”

Rokita also stated that TikTok is a “malicious and menacing threat unleashed on unsuspecting Indiana consumers by a Chinese company that knows full well the harms it inflicts on users.”


The lawsuit alleged that the platform markets itself to younger users as a safe app even though it easily grants access to inappropriate content such as nudity, profanity, and drug and alcohol use. 

The lawsuit also stated that TikTok collects sensitive data from its users and uses their personal information. “[TikTok] has deceived those consumers to believe that this information is protected from the Chinese government and Communist Party.”

Indiana has also been involved in a lawsuit against Meta, the parent company of Instagram, over the platform’s addictive nature and harm to young users’ mental health. Dozens of other states have filed similar lawsuits against Meta. 

Indiana was also one of the first states to ban TikTok on any government-issued devices over “the threat of gaining access to critical US information and infrastructure.”