
Regulating AI Technology

As we propel ourselves into the future, technology is developing at an astonishing rate. Many companies and countries have had to adjust laws and regulations to meet the demands of the ever-changing digital world. Anti-hacking and cyber-security laws have seen many amendments to cope with unexplored areas of the digital landscape. Recently, calls have been issued for better regulation of the developing field of Artificial Intelligence (AI).

AI is an expansive field which, in essence, aims to mimic human intelligence through machine learning. The tasks this technology can lend itself to range from the simple to the complex. We have already begun to see its uses in our everyday lives: many of us carry Apple’s “Siri” in our pockets or keep Amazon’s “Alexa” in our homes, and even those who do not will have encountered equivalent voice-recognition technology in phones, computers, TVs, cars and more.

Driverless cars, virtual agents, and management and learning systems all make use of AI. Its reach extends into agriculture, marketing, security, transport, manufacturing, climate change, healthcare and beyond, and its use and autonomy in all of these areas are rapidly growing and spreading into both broad and specific niches.


It is no wonder, therefore, that there are calls not only for a more extensive look at AI regulation but for a tailored approach. Sundar Pichai, CEO of Google’s parent company Alphabet, recently called for such regulation. While guidelines are developing in both the US and the EU, he calls for a global standard. Writing in the Financial Times, he notes that Google outlined its own AI principles in 2018, which aim to ensure “safety, privacy, fairness and accountability, alongside monitoring where AI should not be used, for example, where it could violate human rights.”

This call for regulation has not come from Google alone. Twitter recently confronted the facial-recognition company Clearview AI (whose clients include Homeland Security and the FBI), demanding that it stop taking images from the site for facial recognition in violation of Twitter’s privacy policies. Further, the European Commission is considering a ban on facial-recognition technology in order to catch up with the field and address areas where it could be abused. While some countries, such as China, are embracing facial recognition and rolling it out rapidly, many campaigners have spoken out against the technology for infringing upon individuals’ privacy rights.

The use and development of AI technologies have exposed many challenges and considerations in ethics, safety, justice and beyond. As long as technology companies and their leaders develop artificial intelligence, they hold the power to regulate it and steer it into whatever areas they see fit. Without a government framework or a worldwide code of conduct, then, how can these developers be held to a universal standard and use these systems fairly?


The answer already seems to be underway. In 2019, forty-two countries, members of the Organisation for Economic Co-operation and Development (OECD) and beyond, formally agreed to uphold standards in AI development “to ensure AI systems are designed to be robust, safe, fair and trustworthy.” Countries including the USA and the UK, alongside many in South America, Africa, Asia, Europe and the Middle East, came together to agree on policies. The official OECD press release summarizes the principles as follows:

1. AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
2. AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
3. There should be transparency and responsible disclosure around AI systems to ensure that people understand when they are engaging with them and can challenge outcomes.
4. AI systems must function in a robust, secure and safe way throughout their lifetimes, and potential risks should be continually assessed and managed.
5. Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.

With innovation and technology developing so rapidly, the question is how we can foresee all of the pitfalls that such advancement will bring. Regulation itself may be something that needs constant revisiting and rewriting, alongside encouraging more and more governments to sign up to universal standards. To envisage and guard against every potential downside and abuse in advance, without stalling progress or denying humanity the benefits of artificial intelligence, seems a mammoth task.