Deepfake is a term for a type of synthetic media in which a person in an existing image or video is replaced with someone else’s likeness. Deepfakes normally use artificial intelligence to manipulate or create visual or audio content. They are becoming increasingly common, and range from the completely harmless to the very dangerous. You may have seen social media apps that let users replace film characters’ faces with their own, or countless clips of Nicolas Cage dubbed into (what is beginning to seem like) almost every movie ever made. Altering videos in this way can be put to great entertaining and artistic effect; it can also be used dangerously, to spread misinformation and harmful content.
Digital face editing, or CGI, has been employed many times in the movie industry: Rogue One: A Star Wars Story, for example, digitally recreated the late Peter Cushing and a young Carrie Fisher on screen. Digital editing of this kind is not classed as a deepfake, and it is often done manually rather than with machine learning or AI, but the results are similar and are frequently mistaken for deepfakes. In some online examples, AI-generated deepfakes have actually improved on the CGI in films.
On the term deepfake, the BBC explained: ‘In 2017, a few posts emerged on Reddit showing how it was possible to use artificial intelligence to seamlessly swap faces in videos. That technique was called deepfake. The “deep” bit comes from deep learning, a branch of AI that uses something known as neural networks. In a nutshell, neural networks are a type of machine learning technique that bears some resemblance to how the human brain works.’
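The ‘neural network’ idea the BBC describes can be sketched in miniature. The toy below is a single artificial neuron trained by gradient descent to learn the logical OR function; it is purely illustrative (real deepfake models use deep networks with millions of parameters), and the training data and learning rate are made up for the example.

```python
import math

def sigmoid(x):
    # Squash any input into the range (0, 1), loosely analogous to a
    # biological neuron "firing" more or less strongly.
    return 1.0 / (1.0 + math.exp(-x))

# Toy training data: inputs and targets for the logical OR function.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

# One neuron: two weights and a bias, adjusted by gradient descent.
w1, w2, b = 0.0, 0.0, 0.0
lr = 0.5  # learning rate (an arbitrary choice for this sketch)

for _ in range(5000):
    for (x1, x2), target in samples:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        err = out - target
        # Nudge each parameter against the error gradient.
        w1 -= lr * err * out * (1 - out) * x1
        w2 -= lr * err * out * (1 - out) * x2
        b  -= lr * err * out * (1 - out)

for (x1, x2), target in samples:
    print((x1, x2), round(sigmoid(w1 * x1 + w2 * x2 + b)))
```

A deepfake generator is this same learning-by-adjustment loop scaled up enormously, with images rather than pairs of bits as inputs.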
There are often ways to detect a deepfake, although they are becoming ever more convincing. Many, however, are made by amateurs and are not that persuasive, albeit still entertaining in many cases. Deepfakes will often show visible artefacts around the face, such as blurring or flickering, most easily spotted when the face changes angle rapidly. Another common issue is that the eyes move independently of each other.
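The flickering cue can be illustrated with a much-simplified sketch: compare each frame of a face crop with the previous one and flag sudden jumps in pixel values. Real detectors are trained models analysing full video, not a hand-set threshold; the frames and threshold below are invented for illustration only.

```python
def frame_diff(a, b):
    # Mean absolute difference between two equally sized grayscale frames.
    return sum(abs(p - q) for p, q in zip(a, b)) / len(a)

def flag_flicker(frames, threshold=40.0):
    # Return indices of frames that differ sharply from their predecessor,
    # a crude stand-in for the "flicker" artefact described above.
    return [i for i in range(1, len(frames))
            if frame_diff(frames[i - 1], frames[i]) > threshold]

# Made-up 4-pixel face crops: a smooth sequence with one abrupt jump.
frames = [
    [100, 102, 101, 99],
    [101, 103, 102, 100],
    [102, 104, 103, 101],
    [180, 60, 190, 55],   # sudden artefact-like jump
    [103, 105, 104, 102],
]
print(flag_flicker(frames))  # flags the jump and the recovery after it
```

In practice such heuristics are only a starting point; as the text notes, the best fakes suppress these artefacts, which is why automated detection increasingly relies on trained models rather than fixed rules.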
When deepfakes first emerged, they were largely found in pornography, as Wired explains: ‘the first widespread use of synthetic media – non-consensual deepfake pornography, which almost exclusively targets women – has proliferated wildly since it first emerged at the end of 2017. According to Amsterdam-based cybersecurity startup Sensity, which was founded in 2018 to combat deepfakes in visual media, the number of deepfake pornography videos online is doubling every six months, and by next summer, there will be 180,000 available to view. By 2022, that number will have reached 720,000.’ The Guardian reported that the AI firm Deeptrace found 15,000 deepfake videos online in September 2019, a figure that had nearly doubled over the preceding nine months; 96 per cent of these were pornographic, and 99 per cent of those targeted female celebrities.
The widespread use of deepfakes in this manner is extremely damaging and problematic: fake media of this kind can ruin people’s lives or spread dangerous misinformation. The misuse of deepfakes is on track to become even more dangerous, as Wired points out: ‘in 2021, deepfakes will develop further as weapons of fraud and political propaganda, and we are already seeing examples of this. In early 2020, the Belgian branch of Extinction Rebellion used AI to generate a fictional speech by Belgian prime minister Sophie Wilmès. To achieve this, the group took an authentic video address made by Wilmès and used machine learning to manipulate the words she spoke to their own ends. The result: Wilmès is generated in a video making a fake speech in which she claims that Covid-19 is directly linked to the “exploitation and destruction by humans of our natural environment”.’
Due to this misuse, we are moving into a world where video content is becoming less and less trustworthy. Deepfake technology could be used in many beneficial ways in areas such as film, gaming, education and entertainment: creating realistic graphics, improving CGI, translating videos into different languages, producing lifelike portrayals of long-dead historical figures, and so forth. Yet its exploitation to spread misinformation threatens to corrupt the credibility of all information on the internet and beyond. To avoid this, many have underlined the importance of digital literacy in building understanding and awareness of deepfakes, alongside establishing suitable human and automated fact-checking processes and regulating the technology itself, in order to create a future that is not damaged by deepfakes but augmented by them.