“Fake news” is a bunch of lies dressed up as news. A “deepfake” is a fabricated image or video generated by feeding samples of existing video and audio through software, most often to publicly embarrass a prominent public figure. The dust has not even settled around the intrusion of fake news into our society, and we already have to deal with deepfakes. How have these phenomena come to exist? And, most importantly, which of these threats to democracy is more dangerous?

The phenomenon of fake news may seem new, but in reality it is not. As long as humans have been dwelling on this planet, made-up stories have been around. In recent years, however, search engines and social media have turned this old problem into a serious concern.

As for deepfakes, between the end of 2017 and January 2018 this face-swap technology went from a nonsense word (first used by the Reddit user who posted the earliest convincing face-swapped videos) to a widely used synonym for videos in which one person’s face is digitally grafted onto another person’s body.

It has already been pointed out that this video trickery, which relies on generative modeling, can produce surprisingly realistic results. Further tweaks by a skilled video editor can make them look even more convincing.
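To make the idea of “generative modeling” concrete, here is a minimal sketch of the shared-encoder autoencoder approach commonly associated with early face-swap deepfakes. Everything below (layer sizes, image resolution, loss) is an illustrative assumption, not the exact architecture of any particular tool.

```python
# Sketch of the shared-encoder autoencoder idea behind early face-swap deepfakes.
# Layer sizes and the 64x64 resolution are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a 3x64x64 face crop to a compact latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a 3x64x64 face crop from the shared latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 256 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 256, 8, 8))

# One shared encoder, one decoder per identity.
encoder = Encoder()
decoder_a = Decoder()  # trained to reconstruct person A's faces
decoder_b = Decoder()  # trained to reconstruct person B's faces

def training_step(faces_a, faces_b, loss_fn=nn.L1Loss()):
    """Each decoder learns to rebuild its own person from the shared code."""
    loss_a = loss_fn(decoder_a(encoder(faces_a)), faces_a)
    loss_b = loss_fn(decoder_b(encoder(faces_b)), faces_b)
    return loss_a + loss_b

def swap_face(face_a):
    """The face-swap trick: encode person A, decode with person B's decoder."""
    with torch.no_grad():
        return decoder_b(encoder(face_a))
```

Because the encoder is shared, it learns expression and pose common to both people, while each decoder fills in one person’s identity; decoding A’s frames with B’s decoder is what produces the swap.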

This is where the Department of Defense’s research organization, DARPA, comes into the picture. Concerned about the security implications of this new deceptive variant of fake news, the US government, through its Department of Defense, has decided to gather experts around a project that will try to determine whether the increasingly real-looking fake video and audio generated by artificial intelligence might soon be impossible to distinguish from the real thing, even for another AI system.

Consequently, under a project funded by American taxpayers, the world’s leading digital forensics experts came together this summer for an artificial intelligence (AI) fakery contest. Their collective task was to compete to produce the most convincing AI-generated fake video, imagery, and audio. Not only did they generate deepfakes, they also developed tools that can catch these counterfeits automatically. The program behind these tools is called Media Forensics (MediFor).

[Image: Deepfake Tucker]

In addition to the work done by the experts involved in the DARPA challenge, who are exploring tricks that automatically catch deepfakes (strange head movements, odd eye color, and so on), a team led by Siwei Lyu, a professor at the University at Albany, State University of New York, and one of his students realized that the faces made using deepfakes rarely, if ever, blink, and when they do, the eye movement is unnatural. This is because deepfakes are trained on still images, which tend to show a person with his or her eyes open.
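To illustrate how a blink-based check might work, here is a minimal sketch using the “eye aspect ratio” computed from facial landmarks. This is an illustrative stand-in, not Lyu’s actual detector; the 6-point eye layout and all thresholds below are assumptions, borrowed from the common 68-point landmark convention used by libraries such as dlib.

```python
# Sketch of a blink-rate check for video: a real person blinks regularly,
# so an implausibly low blink rate is a (weak) deepfake signal.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmark points around one eye.
    The ratio drops sharply when the eyelid closes."""
    eye = np.asarray(eye, dtype=float)
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame, closed_thresh=0.2, min_closed_frames=2):
    """Count blink events in a sequence of per-frame eye aspect ratios."""
    blinks, closed_run = 0, 0
    for ear in ear_per_frame:
        if ear < closed_thresh:
            closed_run += 1
        else:
            if closed_run >= min_closed_frames:
                blinks += 1
            closed_run = 0
    return blinks

def looks_suspicious(ear_per_frame, fps=30.0, min_blinks_per_minute=4):
    """Flag clips whose blink rate is implausibly low for a real person."""
    minutes = len(ear_per_frame) / (fps * 60.0)
    rate = count_blinks(ear_per_frame) / max(minutes, 1e-6)
    return rate < min_blinks_per_minute
```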

“We are working on exploiting these types of physiological signals that, for now at least, are difficult for deepfakes to mimic,” says Hany Farid, another leading digital forensics expert, from Dartmouth College.

The arrival of these forensics tools may simply signal the beginning of an AI-powered arms race between video forgers and digital detectives. A key problem, says Farid, is that machine-learning systems can be trained to outmaneuver forensics tools.
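Farid’s point about outmaneuvering forensics tools can be made concrete with a short sketch: if a forger can query the detector, they can simply add its verdict to the generator’s training loss. The function and module names below are illustrative assumptions, not any real tool’s API.

```python
# Sketch of the "arms race": train a forger to evade a frozen forensic detector.
import torch
import torch.nn.functional as F

def evasion_training_step(generator, forensic_detector, source_frames, target_frames):
    """One training step that rewards fooling the detector.

    generator:         any model producing fake frames from source frames
    forensic_detector: a frozen classifier whose logit means "this frame is fake"
    """
    fakes = generator(source_frames)

    # Usual forgery objective: stay close to whatever the forger wants to show.
    quality_loss = F.l1_loss(fakes, target_frames)

    # Evasion objective: push the detector's "fake" probability toward zero.
    p_fake = torch.sigmoid(forensic_detector(fakes))
    evasion_loss = p_fake.mean()

    return quality_loss + evasion_loss
```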

Lyu says a skilled forger could get around his eye-blinking tool simply by collecting images that show a person blinking. He adds that his team has developed an even more effective technique, but he is keeping it secret for the moment. “I’d rather hold off at least for a little bit,” Lyu says. “We have a little advantage over the forgers right now, and we want to keep that advantage.”

Facebook and Google have a role to play

It’s easy to see why the US Department of Defense is concerned: right now the President of the United States boasts about the nation’s nuclear arsenal over social media while the U.S. and North Korea inch toward disarmament talks. The last thing anyone needs is a believable fake video of President Trump or Supreme Leader Kim Jong Un announcing a missile launch going viral.

But internet pranksters and malicious enemies of the state capable of making these videos are not the only concern. A quick scan through Facebook’s and Google’s published AI research shows that both companies have invested in developing algorithms that can process, analyze, and alter photos and videos of people. If DARPA wants to lessen this potential digital threat, maybe it should look into what the tech giants are doing.

With Will Knight, Senior Editor for MIT Technology Review
