As protests targeting racial inequality and injustice shake the world, many industries are, quite rightly, being called out for systemic racism. After the horrifying killing of George Floyd, in yet another example of police brutality, US police forces have been facing long-overdue pressure to reform. This has brought into question the use of facial recognition technology, especially in law enforcement. ‘Civil rights advocates raised concerns about potential racial bias in surveillance technology,’ the BBC reported. Racial profiling has long been a problem within policing, and this is not the first time surveillance technology has come under scrutiny: its use to identify potential criminals was being criticized for racial profiling well before the 2020 protests. Recently, however, technology giants such as Amazon and IBM have pulled their facial recognition software from police use.
IBM was the first of the two to announce the move, stating that the AI systems that have aided law enforcement need to be tested for ‘racial bias’. IBM chief executive Arvind Krishna wrote a letter to Congress stating that the ‘fight against racism is as urgent as ever’ and that ‘IBM firmly opposes and will not condone the uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms… We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies,’ the BBC reported. IBM suggested a move towards police body cameras and data analytics instead.
Following this move, Amazon suspended police use of its facial recognition software for one year, also voicing support for the Black Lives Matter movement on social media, in order to give US lawmakers time to enact legislation regulating the technology’s use. ‘We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge,’ Amazon said in its statement on the matter.
Amazon’s facial recognition technology, much like IBM’s, has previously been called into question over human rights concerns. Congresswoman Alexandria Ocasio-Cortez wrote in a tweet commending IBM’s move: ‘…Facial recognition is a horrifying, inaccurate tool that fuels racial profiling + mass surveillance. It regularly falsely ID’s Black + Brown people as criminal. It shouldn’t be anywhere near law enforcement.’
The racial bias in facial recognition technology has long been documented, and Amazon’s system, Rekognition, has also been shown to produce inaccurate readings. According to The Guardian, ‘An experiment run by the ACLU in 2018 showed Rekognition incorrectly matched 28 members of Congress to photos of people arrested for a crime. It overwhelmingly misidentified Congress members who are not white. Facial recognition software, like many forms of artificial intelligence, has a long history of racial bias. The field of artificial intelligence, which is overwhelmingly white and male, is frequently criticized for its lack of diversity.’
Facial recognition technology has generally proven inaccurate when identifying the faces of people who are not white. In 2019, the National Institute of Standards and Technology found that the algorithms ‘were 10 to 100 times more likely to inaccurately identify a photograph of a black or East Asian face, compared with a white one,’ according to Scientific American.
Facial recognition technology has also frequently been criticized for being more likely to misidentify a black person as a criminal, and therefore for demonstrating racial bias itself. Since technology cannot be innately biased, this has been identified as an issue with the human-written code, pointing towards biases unconsciously programmed in by humans. A 2017 article from The Guardian, entitled ‘How white engineers built racist code – and why it’s dangerous for black people’, explains: ‘algorithms are usually written by white engineers who dominate the technology sector. These engineers build on pre-existing code libraries, typically written by other white engineers,’ highlighting yet another area of systemic racism, inequality and lack of diversity. As AI technology learns from these flawed foundations, its inability to correctly identify people of color only exacerbates the problem.
According to CBS News, this issue is thankfully reaching beyond technology corporations: ‘Democrats in Congress are probing the FBI and other federal agencies to determine if the surveillance software has been deployed against protesters, while states including California and New York are considering legislation to ban police use of the technology.’ This is an issue that has long needed attention, and in the aftermath of George Floyd’s murder by a US police officer, more companies, and law enforcement in general, will be forced to address it.