Posts

Billie Eilish And Stevie Wonder Among 200+ Artists Demanding Protection Against Predatory Use Of AI 

A group of more than 200 high-profile musicians and estates has signed an open letter demanding protections against the predatory use of artificial intelligence (AI) that mimics human artists’ voices, songs, and overall likenesses, according to new reports.

beer

Scientists In Belgium Are Using AI To Make Their Beer Taste Better 

Researchers in Belgium are currently exploring how AI can be used to improve the taste of Belgian beer, which is known for its high quality and long history.

ai

UK Artists Potentially Joining Lawsuit Against Midjourney Over Use Of Their Work To Train AI Software 

Midjourney is one of many image generators now available to the public online that use artificial intelligence to produce an image, or series of images, based on the text a user enters as a prompt.

AI technology has been on the rise over the past few years and has recently entered the mainstream. The ways in which these systems gather their training data, however, have been called out as unethical, or outright theft, by writers, artists, and other creatives whose works are unknowingly being used to train them.

With Midjourney specifically, a list recently surfaced of around 16,000 artists whose work has been used to train Midjourney’s AI. The artists include Bridget Riley, Damien Hirst, Rachel Whiteread, Tracey Emin, David Hockney, and Anish Kapoor, according to the Guardian.

UK artists have now contacted lawyers in the US to discuss joining a class action lawsuit against Midjourney and other AI companies involved in similar practices.

Tim Flach, the president of the Association of Photographers and a photographer himself who was included on the list of 16,000, stressed the importance of collaboration when it comes to battling AI programs and companies.

“What we need to do is come together. This public showing of this list of names is a great catalyst for artists to come together and challenge it. I personally would be up for doing that.”

The list of names was released in a 24-page document filed as part of the class action lawsuit brought by 10 American artists in California. The lawsuit names Midjourney, Stability AI, Runway AI, and DeviantArt as defendants.

Matthew Butterick, one of the lawyers representing the artists, stated that since filing, they’ve received interest from artists all over the world who want to join the suit. The tech firms involved have until February 8th to respond to the claim, which states the following:

“Though [the] defendants like to describe their AI image products in lofty terms, the reality is grubbier and nastier: AI image products are primarily valued as copyright-laundering devices, promising customers the benefits of art without the costs of artists.”

The lawsuit also stated that with Midjourney specifically, users are allowed, and encouraged, to specify any artist’s personal style when entering their description for the image they want to generate using AI. 

“The impersonation of artists and their style is probably the thing that will stick, because if you take an artist’s style you’re effectively robbing them of their livelihood,” Flach said.

The Design and Artists Copyright Society (DACS) conducted a survey last week of 1,000 artists and agents regarding the lack of legal regulation of generative AI technologies. The survey showed that 89% of respondents want the UK government to regulate generative AI, while 22% had discovered that their own works were used to train AI.

“If we’d done our survey now [after the list had come out] we probably would have had a stronger response. A lot of people didn’t know whether their works had been used. There’s a transparency we didn’t have a couple of months ago,” said Reema Selhi, head of policy at DACS.

Selhi went on to discuss how ministers initially wanted to open up copyright law to make it easier for companies to train AI models on artists’ works without needing their permission.

“We’ve had such a great strength of feeling from people that this is completely copyright infringement. Permission hasn’t been sought. They haven’t given consent. They haven’t been remunerated. They haven’t been credited.”

DACS is actively pushing for an official licensing or royalty system to be put in place so artists have more control over where and how their works are used, or at the very least receive some form of compensation.

ai

40% Of Jobs Worldwide Could Be Affected By Artificial Intelligence, IMF Says 

According to the International Monetary Fund (IMF), around 40% of jobs globally could be affected by the rise in the use of artificial intelligence (AI). The IMF warned that these trends in AI could deepen the inequality already present in the tech industry and in other industries where AI is being used.

IMF chief Kristalina Georgieva published an official blog post on Sunday in which she called on government powers to establish effective “social safety nets and offer retraining programs” to counter the negative impacts of AI, according to CNN.

“In most scenarios, AI will likely worsen overall inequality, a troubling trend that policymakers must proactively address to prevent the technology from further stoking social tensions,” she wrote.

Georgieva published the post ahead of the annual World Economic Forum meeting in Switzerland, where AI is set to be a major topic of conversation.

Sam Altman, the chief executive of ChatGPT-maker OpenAI, and Satya Nadella, the CEO of Microsoft, will also speak at the Forum later this week and be involved in a debate being called “Generative AI: Steam engine of the Fourth Industrial Revolution?”

“As AI continues to be adopted by more workers and businesses, it’s expected to both help and hurt the human workforce,” Georgieva said in her blog, according to CNN.

Georgieva also stated that the negative impacts of AI are expected to hit nations with advanced economies hardest.

She explained that in more developed economies, up to 60% of jobs could potentially be impacted by AI, but half of those jobs could benefit from AI-driven productivity gains.

“For the other half, AI applications may execute key tasks currently performed by humans, which could lower labor demand, leading to lower wages and reduced hiring. In the most extreme cases, some of these jobs may disappear,” wrote Georgieva.

CNN reported that within emerging markets, economies with sustained growth, 40% of jobs are expected to be impacted by AI. In lower-income nations with developing economies, the figure is 26%.

“Many of these countries don’t have the infrastructure or skilled workforces to harness the benefits of AI, raising the risk that over time the technology could worsen inequality,” stated Georgieva.

ai

The New York Times Is Suing Microsoft And OpenAI Over Copyright Infringement 

The New York Times is suing OpenAI and Microsoft over copyright infringement, alleging that the two companies’ artificial intelligence technology illegally copied millions of Times articles to train AI services like ChatGPT.

google

Google To Potentially Invest Hundreds Of Millions Into Character.AI Startup 

Google is currently in talks to invest in Character.AI, an artificial intelligence chatbot platform startup. According to CTech News, Character.AI was created by Noam Shazeer and Daniel De Freitas, two former employees of Google Brain.

Google is prepared to invest “hundreds of millions of dollars” into Character.AI as it continues to train chatbot models to talk to users, according to sources who spoke to Reuters. 

Character.AI already uses Google’s cloud services and Tensor Processing Units to train its chatbot models, so this investment would deepen that existing partnership.

Character.AI allows users to log in and choose from a variety of celebrities, movie characters, creatures, and more to chat with. Users can even create their own character chatbot to speak with. A subscription costs $9.99 a month, but the platform is also free to use.

According to data from Similarweb, reported by CTech, “Character.AI’s chatbots, with various roles and tones to choose from, have appealed to users ages 18 to 24, who contributed about 60% of its website traffic. The demographic is helping the company position itself as the purveyor of more fun personal AI companions, compared to other AI chatbots from OpenAI’s ChatGPT and Google’s Bard.”

Within the first six months of launching, Character.AI saw about 100 million visits every month. 

Reuters wrote that “The startup is also in talks to raise equity funding from venture capital investors, which could value the company at over $5 billion.

In March, it raised $150 million in a funding round led by Andreessen Horowitz at $1 billion valuation.

Google has been investing in AI startups, including $2 billion for model maker Anthropic in the form of convertible notes, on top of its earlier equity investment.”

ai

President Biden Issues Executive Order for AI Oversight

On Monday, President Joe Biden signed an executive order covering a wide range of topics related to artificial intelligence, paving the way for new government regulations and funding.

The 111-page order covers multiple facets of AI and areas of concern or development, including civil rights, cybersecurity, discrimination, global competition, and a push for establishing federal AI jobs.

A senior White House official, who wished to remain anonymous, reportedly told NBC News that the potential uses of AI are so vast that effective regulations must cover a lot of ground. He also underscored the need for “significant bipartisan legislation.”

“AI policy is like running into a decathlon, and there’s 10 different events here, and we don’t have the luxury of just picking ‘we’re just going to do safety’ or ‘we’re just going to do equity’ or ‘we’re just going to do privacy.’ You have to do all of these things.”

The order expands on a July nonbinding agreement between seven of the most prominent U.S. technology companies developing AI. The agreement required the companies to hire outside experts to identify weaknesses in their systems. The government can legally require companies to disclose the results of those safety tests under the Defense Production Act.

The Department of Commerce will also be required to develop guidelines for properly “watermarking” AI content, such as “deepfake” videos and ChatGPT-generated essays.

In an interview with NBC News, the Stanford Institute for Human-Centered Artificial Intelligence co-director Fei-Fei Li stressed the importance of government funding for AI to solve society’s pressing issues.

“The public sector holds a unique opportunity in terms of data and interdisciplinary talent to cure cancer, cure rare diseases, to map out biodiversity at a global scale, to understand and predict wildfires, to find climate solutions, to supercharge our teachers. There’s so much the public sector can do, but all of this is right now starved because we are severely lacking in resources.”

Some of the other topics covered in the order are geared toward anticipating and mitigating real-world problems that may arise from the widespread implementation of AI.

For instance, it asks the Department of Labor to address the potential for AI to cause job losses; the Consumer Financial Protection Bureau and Department of Housing and Urban Development to address how AI may exacerbate discrimination in banking and housing sectors; and the Office of Management and Budget, and others, to determine how the government can use AI without jeopardizing privacy rights.

The AI Now Institute managing director, Sarah Myers West, praised President Biden for including ethical concerns in the executive order. The nonprofit focuses on the societal implications of artificial intelligence use.

“It’s great to see the White House set the tone on the issues that matter most to the public: labor, civil rights, protecting privacy, promoting competition. This underscores you can’t deal with the future risks of AI without adequately dealing with the present. The key to looking forward will be to ensure strong enforcement as companies attempt to set a self-regulatory tone: industry cannot be left to lead the conversation on how to adequately address the effects of AI on the broader public.”

ai

How AI Could Help Prevent Brain Injuries In Contact Sports 

Artificially intelligent computers are now being taught how to identify on-field head impacts in the NFL and other contact sports, as a means of flagging and preventing brain injuries right away rather than waiting for the player to get checked out.

The NFL’s chief medical officer, Allen Sills, spoke with the Guardian about how the new technology is helping to reduce head injuries and major impacts, as well as advancing the equipment medical teams in professional sports leagues have access to.

At the NFL game between the Baltimore Ravens and the Tennessee Titans in London, the technology was used to analyze the level and rate of head impacts for the players, and that information is being used to teach the players better techniques as a preventative measure.

The technology, if adopted, would be built into sensors in football helmets so it can analyze the force of each tackle and estimate how severe a given injury may be when it occurs. In Europe, rugby teams became the first to adopt smart mouthguard technology to flag major impacts in real time.

If a player is hit and the force exceeds a certain threshold, the player will automatically be taken off the field for a head injury assessment by a doctor. Dr Eanna Falvey, World Rugby’s chief medical officer, calls it “a gamechanger in potentially identifying many of the 18% of concussions that now come to light only after a match.”
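
To give a concrete sense of the rule being described, here is a minimal sketch in Python of that threshold logic; the 40g cutoff, field names, and data structure are illustrative assumptions for demonstration, not World Rugby’s actual specification.

```python
from dataclasses import dataclass

# Illustrative sketch only: the threshold value and names below are
# assumptions for demonstration, not World Rugby's real parameters.

HIA_THRESHOLD_G = 40.0  # assumed peak-acceleration cutoff, in g


@dataclass
class ImpactReading:
    player_id: str
    peak_acceleration_g: float  # peak linear acceleration reported by the sensor


def needs_assessment(reading: ImpactReading) -> bool:
    """Return True if the impact exceeds the cutoff, meaning the player
    should be taken off for a head injury assessment (HIA)."""
    return reading.peak_acceleration_g >= HIA_THRESHOLD_G


# Example: a 55g hit would automatically flag the player for assessment.
print(needs_assessment(ImpactReading("player-7", 55.0)))  # True
```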

The technology can also be utilized in sports like boxing to get more accurate readings on whether or not one of the athletes is potentially concussed.

According to Dr Ross Tucker, a science and research consultant for World Rugby: “we are still only scratching the surface when it comes to how smart mouthguards and other technologies could make sports safer.”

“Imagine in the future, we could work out that four impacts above 40G creates the same risk of an injury as one above 90G, or that three within 15 minutes at any magnitude increases risk the same way that one at 70G does. There are so many questions we can start asking,” Tucker says.

“It’s one thing to assist to identify concussions, it’s another entirely to say it’s going to allow coaches and players to track exactly how many significant head impacts they have in a career – especially with all the focus on long-term health risks. If they can manage that load, particularly in training, that has performance and welfare benefits,” he says.
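
Tucker’s equivalences are, as he says, hypothetical, but they suggest the kind of cumulative-load bookkeeping such systems could do. As a toy sketch, the Python below converts each impact into a notional risk score; every number and name here is an invented parameter for illustration, not validated sports science.

```python
from dataclasses import dataclass

# Toy model of the hypothetical above: if four impacts above 40g carried
# the same injury risk as one above 90g, each 40g+ impact would be worth
# a quarter of a "90g equivalent". All values are invented for illustration.

RISK_PER_90G_IMPACT = 1.0
RISK_PER_40G_IMPACT = RISK_PER_90G_IMPACT / 4  # 4 x 40g ~= 1 x 90g (hypothetical)


@dataclass
class Impact:
    magnitude_g: float  # peak acceleration of one head impact, in g


def session_risk(impacts: list[Impact]) -> float:
    """Accumulate a notional risk score across a session's impacts."""
    total = 0.0
    for hit in impacts:
        if hit.magnitude_g >= 90:
            total += RISK_PER_90G_IMPACT
        elif hit.magnitude_g >= 40:
            total += RISK_PER_40G_IMPACT
    return total


# Four 45g hits accumulate the same score as a single 95g hit.
print(session_risk([Impact(45)] * 4) == session_risk([Impact(95)]))  # True
```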

ai

Amazon Invests up to $4 Billion in OpenAI Rival Anthropic in Exchange for Minority Stake

On Monday, Amazon announced it will invest up to $4 billion into the artificial intelligence company Anthropic. In exchange, Amazon will gain partial ownership, and Anthropic will use the company’s cloud computing platform, Amazon Web Services (AWS), more widely.

The growing relationship between the two firms is an example of how some large tech companies with extensive cloud computing resources are using those assets to strengthen their position in the artificial intelligence industry.

According to a statement released by Amazon, Anthropic will use AWS as its primary cloud provider, using the cloud platform to do most of its AI model development and research into AI safety. Anthropic will also have access to Amazon’s suite of in-house AI chips.

“AWS will become Anthropic’s primary cloud provider for mission-critical workloads, including safety research and future foundation model development. Anthropic plans to run the majority of its workloads on AWS, further providing Anthropic with the advanced technology of the world’s leading cloud provider.”

In addition, Anthropic has committed to making its AI models available to AWS users long-term, providing them with early access to features, including the ability to customize Anthropic models for their own purposes.

“With today’s announcement, customers will have early access to features for customizing Anthropic models, using their own proprietary data to create their own private models, and will be able to utilize fine-tuning capabilities via a self-service feature.”

Amazon Web Services customers already have access to Anthropic’s AI models through Amazon Bedrock, the tech giant’s storefront for AI goods. Bedrock supports not only Amazon’s own models but also those from third-party developers such as Stability AI and AI21 Labs.

In a press release, the co-founder and CEO of Anthropic, Dario Amodei, said that his company is “excited to use AWS’s Trainium chips to develop future foundation models.”

“Since announcing our support of Amazon Bedrock in April, Claude has seen significant organic adoption from AWS customers. By significantly expanding our partnership, we can unlock new possibilities for organizations of all sizes as they deploy Anthropic’s safe, state-of-the-art AI systems together with AWS’s leading cloud technology.”

Anthropic stated that Amazon’s minority stake would not alter the company’s corporate governance structure or its dedication to the ethical advancement of artificial intelligence.

“Our corporate governance structure remains unchanged, with the Long Term Benefit Trust continuing to guide Anthropic in accordance with our Responsible Scaling Policy. As outlined in this policy, we will conduct pre-deployment tests of new models to help us manage the risks of increasingly capable AI systems.”

Several cloud market leaders, like Microsoft and now Amazon, have made investments in artificial intelligence technology. OpenAI, the company that developed ChatGPT, received $1 billion from Microsoft in 2019. Microsoft recently also invested $10 billion in OpenAI and is striving to integrate OpenAI’s technology into consumer-facing Microsoft products such as Bing.

This deal is Amazon’s most recent push into the artificial intelligence space to compete with industry leaders like Microsoft and Alphabet’s Google.

ai

Christopher Nolan Hopes Oppenheimer Will Act As A Warning For Silicon Valley And The Power Of Technology 

After a screening of Oppenheimer at The Whitby Hotel, Christopher Nolan joined a panel with the authors of American Prometheus, the book the movie is based on. During the panel, Nolan discussed wanting technology moguls and Silicon Valley audiences to take to heart the film’s message about not knowing the power of one’s own creation.

Chuck Todd of Meet the Press asked Nolan what he hoped Silicon Valley might learn from the film:

“I think what I would want them to take away is the concept of accountability.”

“When you innovate through technology, you have to make sure there is accountability. The rise of companies over the last 15 years bandying about words like ‘algorithm,’ not knowing what they mean in any kind of meaningful, mathematical sense. They just don’t want to take responsibility for what that algorithm does,” Nolan explained, according to The Verge.

“Applied to AI? That’s a terrifying possibility. Terrifying. Not least because as AI systems go into the defense infrastructure, ultimately they’ll be charged with nuclear weapons, and if we allow people to say that that’s a separate entity from the person who’s wielding, programming, putting AI into use, then we’re doomed. It has to be about accountability.”

“We have to hold people accountable for what they do with the tools that they have.”

A majority of the tech companies that run the world currently rely on algorithms to gain and hold onto audiences and users.

“When I talk to the leading researchers in the field of AI they literally refer to this right now as their Oppenheimer moment,” Nolan stated.

“They’re looking to his story to say what are the responsibilities for scientists developing new technologies that may have unintended consequences.”

When asked “Do you think Silicon Valley is thinking that right now?” Nolan replied:

“They say that they do, and that’s helpful. That at least it’s in the conversation. And I hope that thought process will continue. I’m not saying Oppenheimer’s story offers any easy answers to these questions. But at least it serves as a cautionary tale.”