Facebook Scientists Can Now Tell Where Deepfakes Come From 

Artificial intelligence researchers at Facebook have developed new software that can reveal whether an image or video is a deepfake, as well as where it came from.

Deepfakes are videos that have been digitally altered using AI. Typically, they show hyper-realistic celebrity faces saying whatever the person making the post wants them to say. These videos have become increasingly realistic and popular, making it extremely hard for viewers to tell what's real and what's not.

The Facebook researchers claim their new AI software can determine whether a piece of media is a deepfake from a single still frame of the video. The software can also identify the AI model that was used to create the video, no matter how advanced the technique.

Tal Hassner, an applied research lead at Facebook, said that it’s “possible to train AI software to look at the photo and tell you with a reasonable degree of accuracy what is the design of the AI model that generated that photo.”
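
Facebook hasn't released the code behind this system, but the broad idea of fingerprint-based attribution can be sketched in a few lines of Python, assuming NumPy and SciPy are available. Everything below is illustrative: the residual-based extract_fingerprint function and the KNOWN_MODELS table are hypothetical stand-ins, not Facebook's actual method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def extract_fingerprint(image: np.ndarray) -> np.ndarray:
    """Isolate an image's high-frequency residual. Real systems learn this
    with a trained network; subtracting a blurred copy is a crude stand-in."""
    smoothed = gaussian_filter(image.astype(np.float64), sigma=2)
    residual = image.astype(np.float64) - smoothed
    flat = residual.flatten()
    return flat / (np.linalg.norm(flat) + 1e-12)

# Toy "database" of known generator fingerprints (random placeholders,
# not real model signatures).
rng = np.random.default_rng(0)
KNOWN_MODELS = {
    "generator_family_a": rng.standard_normal(64 * 64),
    "generator_family_b": rng.standard_normal(64 * 64),
}

def attribute(image: np.ndarray, threshold: float = 0.3) -> str:
    """Match the image's residual against known fingerprints; anything
    below the threshold is treated as an unseen ("novel") generator."""
    fp = extract_fingerprint(image)
    best_name, best_score = "novel/unknown generator", 0.0
    for name, ref in KNOWN_MODELS.items():
        ref = ref / np.linalg.norm(ref)
        score = abs(float(fp @ ref))  # cosine similarity of residuals
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else "novel/unknown generator"

# Example: a single 64x64 grayscale frame pulled from a video.
frame = rng.random((64, 64))
print(attribute(frame))  # random input should come back as novel/unknown
```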

Deepfakes in general are a major threat to internet safety; in fact, Facebook banned them in January 2020 because of the misinformation they were spreading. Individuals can easily create doctored videos of powerful politicians making wild claims that other world leaders could see and take seriously before the video is determined to be fake.

Hassner said that detecting deepfakes is a “cat and mouse game, they’re becoming easier to produce and harder to detect. One of the main applications of deepfakes so far has been in pornography where a person’s face is swapped onto someone else’s body, but they’ve also been used to make celebrities appear as though they’re doing or saying something they’re not.”

Nina Schick, a deepfake expert who has worked closely with the White House and President Biden on this issue, emphasized that while it's impressive that technology now exists to detect when these videos are fake, it's just as important to find out how well these detection tools actually work in the real world, and whether they can track and stop the people who keep making deepfakes.

“It’s all well and good testing it on a set of training data in a controlled environment. But one of the big challenges seems to be that there are easy ways to fool detection models, like by compressing an image or a video.”
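
Schick's compression concern is straightforward to test against any detector. In the hypothetical sketch below, detector_score stands in for whatever classifier you have on hand (it is not a real API), and the roundtrip uses the Pillow imaging library:

```python
import io
from PIL import Image  # Pillow

def jpeg_roundtrip(img: Image.Image, quality: int) -> Image.Image:
    """Re-encode the image as JPEG at the given quality, then decode it."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf)

def compression_sweep(img, detector_score, qualities=(95, 75, 50, 25)):
    """Report how a detector's fake-probability drifts as JPEG quality drops.
    A robust detector should stay close to its baseline score."""
    baseline = detector_score(img)
    for q in qualities:
        score = detector_score(jpeg_roundtrip(img, q))
        print(f"quality={q:>3}: score={score:.3f} (baseline {baseline:.3f})")
```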

It’s still unclear how, or even whether, Facebook will use this technology to combat the misinformation deepfakes spread on the platform, but Hassner explained that ideally the technology will see wide use in the future.

“If someone wanted to abuse them [generative models] and conduct a coordinated attack by uploading things from different sources, we can actually spot that just by saying all of these came from the same mold we’ve never seen before but it has these specific properties, specific attributes,” he said.
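
Again, Facebook's implementation isn't public, but the "same mold" idea maps naturally onto grouping uploads by how similar their fingerprints are. Here is a minimal sketch, assuming fingerprint vectors like the ones extracted in the earlier example:

```python
import numpy as np

def cluster_by_fingerprint(fingerprints, min_similarity=0.9):
    """Greedy grouping: each upload joins the first cluster whose
    representative fingerprint it closely matches (cosine similarity)."""
    clusters, reps = [], []
    for i, fp in enumerate(fingerprints):
        fp = fp / (np.linalg.norm(fp) + 1e-12)
        for c, rep in enumerate(reps):
            if float(fp @ rep) >= min_similarity:
                clusters[c].append(i)
                break
        else:  # no existing cluster matched: start a new one
            clusters.append([i])
            reps.append(fp)
    return clusters

# A large cluster of uploads from unrelated accounts, all sharing one
# never-before-seen fingerprint, would suggest a coordinated campaign.
```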