Facebook And Instagram To Start Labeling Digitally Altered Content ‘Made With AI,’ Meta Says

Meta, the owner of Facebook and Instagram, announced that it will be making major changes to its policies on digitally created and digitally altered media.

Meta will start adding “Made with AI” labels to posts that use artificial intelligence to create photos, videos, and audio published on Facebook and Instagram. The apps will begin adding this label in May. 

Monika Bickert, vice president of content policy, stated in a blog post that Meta would “apply separate and more prominent labels to digitally altered media that poses a particularly high risk of materially deceiving the public on a matter of importance, regardless of whether the content was created using AI or other tools,” according to the Guardian.

A spokesperson also stated that Meta will begin applying more prominent high-risk labels immediately. 

This approach marks an overall shift in the way Meta treats manipulated content. Instead of removing such content altogether, Facebook and Instagram will leave posts up while providing viewers with information about how the media was created or edited.

A company spokesperson said the “labeling approach would apply to content posted on Facebook, Instagram and Threads. Its other services, including WhatsApp and Quest virtual-reality headsets, are covered by different rules.”

In February, Meta’s oversight board said the company’s existing rules on manipulated media were “incoherent” after reviewing a video of President Joe Biden posted on Facebook last year that had been digitally altered to make it seem as though the president was acting inappropriately.

The board said the “policy should also apply to non-AI content, which is not necessarily any less misleading than content generated by AI, as well as to audio-only content and videos depicting people doing things they never actually said or did,” according to the Guardian.

Billie Eilish And Stevie Wonder Among 200+ Artists Demanding Protection Against Predatory Use Of AI 

A group of more than 200 high-profile musicians and estates has signed an official open letter demanding protections against the predatory use of artificial intelligence (AI) that mimics human artists’ voices, songs, and overall likeness, according to new reports.

UK Artists Potentially Joining Lawsuit Against Midjourney Over Use Of Their Work To Train AI Software 

Midjourney is one of many new image generators available to the public online that use artificial intelligence to produce an image, or series of images, based on the text prompts users enter.

AI technology has been on the rise over the past few years and has now officially entered the mainstream. The ways in which AI systems gather their training data, however, have been called out as unethical, or outright theft, by writers, artists, and other creatives whose works are being used to train these systems without their knowledge.

With Midjourney specifically, a list has recently circulated of around 16,000 artists whose work has been used to train Midjourney’s AI. These artists include Bridget Riley, Damien Hirst, Rachel Whiteread, Tracey Emin, David Hockney, and Anish Kapoor, according to the Guardian.

UK artists have now contacted lawyers in the US to discuss joining a class action lawsuit against Midjourney and other AI companies engaged in similar practices.

Tim Flach, the president of the Association of Photographers and a photographer himself who was included on the list of 16,000, stressed the importance of collaboration when it comes to challenging AI programs and companies.

“What we need to do is come together. This public showing of this list of names is a great catalyst for artists to come together and challenge it. I personally would be up for doing that.”

The list of names was released in a 24-page document used in the class action lawsuit filed by 10 American artists in California. The lawsuit names Midjourney, Stability AI, Runway AI, and DeviantArt as defendants.

Matthew Butterick, one of the lawyers representing the artists, stated that since filing, they’ve received interest from artists all over the world in joining the suit. The tech firms involved have until February 8th to respond to the claim, which states the following:

“Though [the] defendants like to describe their AI image products in lofty terms, the reality is grubbier and nastier: AI image products are primarily valued as copyright-laundering devices, promising customers the benefits of art without the costs of artists.”

The lawsuit also stated that with Midjourney specifically, users are allowed, and encouraged, to specify any artist’s personal style when entering their description for the image they want to generate using AI. 

“The impersonation of artists and their style is probably the thing that will stick, because if you take an artist’s style you’re effectively robbing them of their livelihood,” Flach said.

The Design and Artists Copyright Society (DACS) conducted a survey last week of 1,000 artists and agents regarding the lack of legal regulation of generative AI technologies. The survey showed that 89% of respondents want the UK government to regulate generative AI, while 22% had discovered that their own works were used to train AI.

“If we’d done our survey now [after the list had come out] we probably would have had a stronger response. A lot of people didn’t know whether their works had been used. There’s a transparency we didn’t have a couple of months ago,” said Reema Selhi, head of policy at DACS.

Selhi went on to note that ministers had initially wanted to open up copyright laws to make it easier for companies to train AI models on artists’ works without needing their permission.

“We’ve had such a great strength of feeling from people that this is completely copyright infringement. Permission hasn’t been sought. They haven’t given consent. They haven’t been remunerated. They haven’t been credited.”

DACS is actively pushing for an official licensing or royalty system to be put in place so that artists have more control over where and how their works are used, or at the very least receive some form of compensation.

40% Of Jobs Worldwide Could Be Affected By Artificial Intelligence, IMF Says 

According to the International Monetary Fund (IMF), around 40% of jobs globally could be affected by the rise in the use of artificial intelligence (AI). The IMF warned that these trends could deepen the inequality already present in the tech industry and in other industries where AI is being used.

IMF chief Kristalina Georgieva published an official blog post on Sunday in which she called on government powers to establish effective “social safety nets and offer retraining programs” to counter the negative impacts of AI, according to CNN.

“In most scenarios, AI will likely worsen overall inequality, a troubling trend that policymakers must proactively address to prevent the technology from further stoking social tensions,” she wrote.

Georgieva published the post ahead of the annual World Economic Forum meeting in Switzerland, where AI is set to be a major topic of conversation.

Sam Altman, the chief executive of ChatGPT-maker OpenAI, and Satya Nadella, the CEO of Microsoft, will also speak at the Forum later this week and be involved in a debate being called “Generative AI: Steam engine of the Fourth Industrial Revolution?”

“As AI continues to be adapted by more workers and businesses, it’s expected to both help and hurt the human workforce,” Georgieva said in her blog, according to CNN.

Georgieva also stated that the impacts of AI are expected to hit nations with advanced economies the hardest.

She explained that in more developed economies, up to 60% of jobs could potentially be impacted by AI, though roughly half of those jobs could gain from AI-driven productivity improvements.

“For the other half, AI applications may execute key tasks currently performed by humans, which could lower labor demand, leading to lower wages and reduced hiring. In the most extreme cases, some of these jobs may disappear,” wrote Georgieva.

CNN reported that within emerging markets, places with sustained economic growth, 40% of jobs are expected to be impacted by AI. In lower income nations, places with developing economies, 26% of jobs are expected to be impacted by AI. 

“Many of these countries don’t have the infrastructure or skilled workforces to harness the benefits of AI, raising the risk that over time the technology could worsen inequality,” stated Georgieva.

Google To Potentially Invest Hundreds Of Millions Into Character.AI Startup 

Google is currently in conversation to invest in Character.AI, an artificial intelligence chatbot platform startup. According to CTech News, Character.AI was created by Noam Shazeer and Daniel De Freitas, two former employees of Google Brain. 

Google is prepared to invest “hundreds of millions of dollars” into Character.AI as it continues to train chatbot models to talk to users, according to sources who spoke to Reuters. 

Character.AI and Google already have a standing relationship: Character.AI uses Google’s cloud services and Tensor Processing Units to train its chatbot models, so this investment would deepen that partnership.

Character.AI allows users to log in and chat with a variety of celebrities, movie characters, creatures, and more. Users can even create their own character chatbot to speak with. The platform is free to use, with a subscription available for $9.99 a month.

According to data from Similarweb, reported by CTech, “Character.AI’s chatbots, with various roles and tones to choose from, have appealed to users ages 18 to 24, who contributed about 60% of its website traffic.

The demographic is helping the company position itself as the purveyor of more fun personal AI companions, compared to other AI chatbots from OpenAI’s ChatGPT and Google’s Bard.”

Within the first six months of launching, Character.AI saw about 100 million visits every month. 

Reuters wrote that “The startup is also in talks to raise equity funding from venture capital investors, which could value the company at over $5 billion.

In March, it raised $150 million in a funding round led by Andreessen Horowitz at $1 billion valuation.

Google has been investing in AI startups, including $2 billion for model maker Anthropic in the form of convertible notes, on top of its earlier equity investment.”

President Biden Issues Executive Order for AI Oversight

On Monday, President Joe Biden signed an executive order covering a wide range of topics related to artificial intelligence, paving the way for new government regulations and funding.

The 111-page order covers multiple facets of AI and areas of concern or development, including civil rights, cybersecurity, discrimination, global competition, and a push for establishing federal AI jobs.

A senior White House official, who wished to remain anonymous, reportedly told NBC News that the potential uses of AI are so vast that effective regulations must cover a lot of ground. He also underscored the need for “significant bipartisan legislation.”

“AI policy is like running into a decathlon, and there’s 10 different events here, and we don’t have the luxury of just picking ‘we’re just going to do safety’ or ‘we’re just going to do equity’ or ‘we’re just going to do privacy.’ You have to do all of these things.”

The order expands on a July nonbinding agreement between seven of the most prominent U.S. technology companies developing AI. The agreement required the companies to hire outside experts to identify weaknesses in their systems. The government can legally require companies to disclose the results of those safety tests under the Defense Production Act.

The Department of Commerce will also be required to develop guidelines for properly “watermarking” AI content, such as “deepfake” videos and ChatGPT-generated essays.

In an interview with NBC News, Fei-Fei Li, co-director of the Stanford Institute for Human-Centered Artificial Intelligence, stressed the importance of government funding for AI to solve society’s pressing issues.

“The public sector holds a unique opportunity in terms of data and interdisciplinary talent to cure cancer, cure rare diseases, to map out biodiversity at a global scale, to understand and predict wildfires, to find climate solutions, to supercharge our teachers. There’s so much the public sector can do, but all of this is right now starved because we are severely lacking in resources.”

Some of the other topics covered in the order are geared toward anticipating and mitigating real-world problems that may arise from the widespread implementation of AI.

For instance, it asks the Department of Labor to address the potential for AI to cause job losses; the Consumer Financial Protection Bureau and Department of Housing and Urban Development to address how AI may exacerbate discrimination in banking and housing sectors; and the Office of Management and Budget, and others, to determine how the government can use AI without jeopardizing privacy rights.

Sarah Myers West, managing director of the AI Now Institute, a nonprofit focused on the societal implications of artificial intelligence, praised President Biden for including ethical concerns in the executive order.

“It’s great to see the White House set the tone on the issues that matter most to the public: labor, civil rights, protecting privacy, promoting competition. This underscores you can’t deal with the future risks of AI without adequately dealing with the present. The key to looking forward will be to ensure strong enforcement as companies attempt to set a self-regulatory tone: industry cannot be left to lead the conversation on how to adequately address the effects of AI on the broader public.”

Amazon Invests up to $4 Billion in OpenAI Rival Anthropic in Exchange for Minority Stake

On Monday, Amazon announced it will invest up to $4 billion into the artificial intelligence company Anthropic. In exchange, Amazon will gain partial ownership, and Anthropic will use the company’s cloud computing platform, Amazon Web Services (AWS), more widely.

The growing relationship between the two firms is an example of how some large tech companies with extensive cloud computing resources are using those assets to strengthen their position in the artificial intelligence industry.

According to a statement released by Amazon, Anthropic will use AWS as its primary cloud provider, using the cloud platform to do most of its AI model development and research into AI safety. Anthropic will also have access to Amazon’s suite of in-house AI chips.

“AWS will become Anthropic’s primary cloud provider for mission-critical workloads, including safety research and future foundation model development. Anthropic plans to run the majority of its workloads on AWS, further providing Anthropic with the advanced technology of the world’s leading cloud provider.”

In addition, Anthropic has committed to making its AI models available to AWS users long-term, providing them with early access to features, including the ability to customize Anthropic models for their own purposes.

“With today’s announcement, customers will have early access to features for customizing Anthropic models, using their own proprietary data to create their own private models, and will be able to utilize fine-tuning capabilities via a self-service feature.”

Amazon Web Services (AWS) customers already have access to Anthropic’s AI models through Amazon Bedrock, the tech giant’s storefront for AI goods. Bedrock not only supports Amazon’s own models but also those from third-party developers such as Stability AI and AI21 Labs.

In a press release, the co-founder and CEO of Anthropic, Dario Amodei, said that his company is “excited to use AWS’s Trainium chips to develop future foundation models.”

“Since announcing our support of Amazon Bedrock in April, Claude has seen significant organic adoption from AWS customers. By significantly expanding our partnership, we can unlock new possibilities for organizations of all sizes as they deploy Anthropic’s safe, state-of-the-art AI systems together with AWS’s leading cloud technology.”

Anthropic stated that Amazon’s minority stake would not alter the company’s corporate governance structure or its dedication to the ethical advancement of artificial intelligence.

“Our corporate governance structure remains unchanged, with the Long Term Benefit Trust continuing to guide Anthropic in accordance with our Responsible Scaling Policy. As outlined in this policy, we will conduct pre-deployment tests of new models to help us manage the risks of increasingly capable AI systems.”

Several cloud market leaders, like Microsoft and now Amazon, have made investments into artificial intelligence technology. OpenAI, the company that developed ChatGPT, received $1 billion from Microsoft in 2019. Microsoft recently also invested $10 billion in OpenAI and is striving to integrate OpenAI’s technology into consumer-facing Microsoft products such as Bing.

This deal is Amazon’s most recent push into the artificial intelligence space to compete with industry leaders like Microsoft and Alphabet’s Google.

How AI Is Helping Potential Homeowners Find The Best Time To Buy Their Dream Home

According to a recent survey, approximately 12% of people are planning to buy a home this year, which is low compared to recent averages. The same survey found that 27.19% of potential buyers are holding back due to an inability to find a home in their price range. This, however, could change with the utilization of artificial intelligence.

Scientists Utilizing Artificial Intelligence To Find New Hit Songs And Musicians

According to new research from scientists in California, a robot utilizing artificial intelligence (AI) could be the next step in identifying hit pop songs and artists in the music industry. The scientists said that by utilizing the technology, they’ve been able to identify hit songs with 97% accuracy.  

“By applying machine learning to neurophysiologic data, we could almost perfectly identify hit songs. That the neural activity of 33 people can predict if millions of others listened to new songs is quite amazing. Nothing close to this accuracy has ever been shown before,” says Paul Zak, a professor at Claremont Graduate University and the study’s senior author, in a media release.

The AI itself uses a neural network that is apparently straightforward enough that it could also be applied to streaming service recommendations, TV shows, and movies in general.

The music industry today is dominated by streaming services. With billions of songs to choose from, it can be challenging for popular apps such as Spotify, Apple Music, and Tidal to determine which songs their users will want to listen to, especially from newer artists.

Professor Zak claims that he and his colleagues believe their method is twice as effective as previous models, which showed only a 50% success rate.

In the study itself, participants listened to a set of 24 songs while wearing a skull-cap brain scanner. Throughout the process, they were asked about their preferences while the scientists measured their neurophysiological responses. 

“The brain signals we’ve collected reflect activity of a brain network associated with mood and energy levels,” Zak stated.

Based on the responses, the team of scientists was able to use their technology to predict market outcomes for certain songs, including the number of streams a song may receive. This process is referred to as “neuroforecasting,” which means using the brain activity of a select group of people to predict how a larger population will react.

According to Study Finds, which reported on the study, “a statistical model identified potential chart hits 69 percent of the time, but this jumped to 97 percent when machine learning was applied to the data. The team found that even by analyzing neural responses to only the first minute of songs, they achieved a success rate of 82 percent.”

“This means that streaming services can readily identify new songs that are likely to be hits for people’s playlists more efficiently, making the streaming services’ jobs easier and delighting listeners,” Zak explains.

“If in the future wearable neuroscience technologies, like the ones we used for this study, become commonplace, the right entertainment could be sent to audiences based on their neurophysiology. Instead of being offered hundreds of choices, they might be given just two or three, making it easier and faster for them to choose music that they will enjoy.

“Our key contribution is the methodology. It is likely that this approach can be used to predict hits for many other kinds of entertainment too, including movies and TV shows,” Zak stated.

Geoffrey Hinton, ‘The Godfather Of A.I.’, Leaves Google And Warns Of Future Dangers Of A.I.

In 2012, Geoffrey Hinton and two of his graduate students from the University of Toronto created technology that has become the foundation of Artificial Intelligence systems used by some of the biggest tech companies in the world. Now, Hinton has left his job at Google and is warning many about the risks of AI technology, stating that he now regrets his life’s work.