
Snapchat Launches AI Chatbot Powered by ChatGPT

Snapchat is launching its own artificial intelligence chatbot powered by OpenAI’s viral ChatGPT. The feature, called My AI, will be available to Snapchat Plus subscribers starting this week.

In a blog post, Snapchat shared how My AI can help subscribers with various tasks and assist them in their day-to-day activities.

“My AI can recommend birthday gift ideas for your BFF, plan a hiking trip for a long weekend, suggest a recipe for dinner, or even write a haiku about cheese for your cheddar-obsessed pal.”

Snapchat cautioned, however, that the chatbot is “experimental” and may respond in unexpected ways.

“As with all AI-powered chatbots, My AI is prone to hallucination and can be tricked into saying just about anything. Please be aware of its many deficiencies, and sorry in advance!”

“AI hallucination” is a term for when an AI presents false information as fact. In other words, the chatbot may confidently output completely made-up answers, spreading misinformation. At times, its answers may even be nonsensical.


In an email to CNET, a Snapchat representative described how the company customized the latest version of OpenAI’s ChatGPT technology for its platform.

“My AI was trained to have a unique tone and personality that plays into Snapchat’s core values around friendship, learning, and fun. It has been trained to adhere to our trust and safety guidelines.”

The company’s community guidelines prohibit the chatbot from responding with explicit, inflammatory or violent content.
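Snapchat has not published implementation details, but chat products built on OpenAI's models commonly steer tone and safety behavior by prepending a "system" message to every conversation sent to the model. The sketch below illustrates that general pattern; the function name and persona text are hypothetical, not Snapchat's actual configuration.

```python
# Sketch of persona steering via a system prompt, the common pattern for
# chat-completion APIs. All names and the persona text here are
# illustrative; Snapchat has not disclosed its actual prompt or guardrails.

def build_chat_messages(history, user_message,
                        persona=("You are My AI, a friendly, playful assistant. "
                                 "Keep replies short and kind. Refuse explicit, "
                                 "inflammatory, or violent requests.")):
    """Prepend a persona 'system' message so every turn is steered by it."""
    messages = [{"role": "system", "content": persona}]
    # Replay prior turns so the model sees the full conversation.
    for user_turn, assistant_turn in history:
        messages.append({"role": "user", "content": user_turn})
        messages.append({"role": "assistant", "content": assistant_turn})
    messages.append({"role": "user", "content": user_message})
    return messages

msgs = build_chat_messages(
    history=[("hi!", "Hey there! What's up?")],
    user_message="Write a haiku about cheese.",
)
# msgs[0] carries the persona; the rest alternate user/assistant turns.
```

Because the persona message travels with every request, the provider can update tone or safety rules centrally without retraining the underlying model.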

Snapchat will store all conversations between subscribers and their My AI to review and help improve product experience. Users can also submit direct feedback to Snapchat by pressing and holding any message. Snapchat advised users not to “share any secrets with My AI” and to not rely on it for advice.

Currently, the feature is only available to Snapchat Plus members. However, in an interview with The Verge, Snapchat founder and CEO Evan Spiegel said the goal is to make the feature available to all of Snapchat’s 750 million monthly users.


Snapchat is the latest in a string of companies to integrate artificial intelligence into their platforms. Google recently revealed its ChatGPT contender, Bard. The chatbot infamously made a factual error during an ad demo, a stumble that coincided with a $100 billion drop in Google’s market value.

The same week, Microsoft announced it would integrate ChatGPT into its search engine Bing.

Since Snapchat is a messaging service, Spiegel believes it is uniquely positioned to create a personable chatbot. Spiegel told The Verge, “The big idea is that in addition to talking to our friends and family every day, we’re going to talk to AI every day.”

Unlike other AI chatbot integrations on platforms, Snapchat’s My AI interface suggests the chatbot is intended to be more than just a productivity tool. The chatbot has an avatar, and its “user profile” resembles a regular Snapchat friend profile. Users can even change the chat’s wallpaper.

The current price for a subscription to Snapchat Plus is $3.99 per month.


Regulating AI Technology

As we propel ourselves into the future, technology is developing at an astonishing rate. Many companies and countries have had to adjust laws and regulations to meet the demands of the ever-changing digital world. Anti-hacking and cyber-security laws have seen many amendments to cope with uncharted areas of the digital landscape. Recently, calls have grown for better regulation of the developing field of Artificial Intelligence (AI).

AI is an expansive field that, in essence, aims to mimic human intelligence through machine learning. Its applications range from simple tasks to complex roles. We have already begun to see its uses in our everyday lives: many of us carry Apple’s “Siri” in our pockets or keep Amazon’s “Alexa” in our homes, and equivalent voice-recognition technology is familiar from our phones, computers, TVs, cars and more.

Driverless cars, virtual agents, and management and learning systems all utilize AI technology. Its reach extends into agriculture, marketing, security, transport, manufacturing, climate change, healthcare and beyond, and its use and autonomy in all of these areas is rapidly developing, spreading into both broad and highly specific niches.


It is no wonder, therefore, that there are calls not only for a more extensive look at AI regulation but for a tailored approach. Sundar Pichai, CEO of Google’s parent company Alphabet, recently called for such regulation. While guidelines are developing in both the US and the EU, he calls for a global standard. Writing in the Financial Times, he noted that Google outlined its own AI principles in 2018, which aim to ensure “safety, privacy, fairness and accountability, alongside monitoring where AI should not be used, for example, where it could violate human rights.”

The call for regulation has not come from Google alone. Twitter recently confronted the AI company Clearview, whose facial-recognition tool is used by Homeland Security and the FBI, demanding that it stop scraping images from the site for facial recognition in violation of Twitter’s privacy policies. Further, the European Commission is considering a ban on facial recognition technology while it catches up with, and addresses, the ways the equipment could be abused. While some countries, such as China, are embracing facial recognition and rolling it out rapidly, many campaigners have spoken out against the technology as an infringement of individual privacy rights.

The use and development of AI technologies has exposed many challenges in ethics, safety, justice and beyond. As technology companies and their leaders develop Artificial Intelligence, they also hold the power to regulate it and steer it into whatever areas they see fit. Without a government framework or a worldwide code of conduct, how will these developers be held to a universal standard and use these systems fairly?


An answer already seems to be underway. In 2019, forty-two countries, members of the Organization for Economic Cooperation and Development (OECD) and beyond, formally agreed to uphold standards in AI development “to ensure AI systems are designed to be robust, safe, fair and trustworthy.” Countries such as the USA and the UK, along with many in South America, Africa, Asia, Europe and the Middle East, came together to agree on these policies. The official OECD press release summarizes the objectives as follows:

1. AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
2. AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
3. There should be transparency and responsible disclosure around AI systems to ensure that people understand when they are engaging with them and can challenge outcomes.
4. AI systems must function in a robust, secure and safe way throughout their lifetimes, and potential risks should be continually assessed and managed.
5. Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.

With innovation and technology developing so rapidly, the question is how we can foresee all of the pitfalls that such advancement will bring. Regulation itself may need constant revisiting and rewriting, alongside encouraging more and more governments to sign up to universal standards. Guarding against every potential downside and abuse, without stalling progress or the benefits artificial intelligence can bring humanity, seems a mammoth task.