What Are Deepfakes, and Why Exactly Are They So Dangerous?

Deepfakes are videos that swap one person's face for another's, so that people can be shown saying and doing things they never said or did. They're now easy to make with free software.

The potential to fool large numbers of people with deepfakes has led to some scary predictions about their dangers.

In 2018, The Guardian speculated deepfakes may not just "threaten our already vulnerable information ecosystem," but may go so far as to "undermine the possibility of a reliable, shared reality."

"It's not hard to imagine this technology's being used to smear politicians, create counterfeit revenge porn or frame people for crimes," The New York Times wrote in 2018. "Lawmakers have already begun to worry about how deepfakes could be used for political sabotage and propaganda."

It's not just media outlets sounding the alarm, either. In 2019, the House Intelligence Committee held hearings on deepfakes, partially in response to a doctored video of Speaker of the House Nancy Pelosi.

"We're entering an era in which our enemies can make it look like anyone is saying anything at any point in time, even if they would never say those things," Barack Obama appeared to say in a 2018 deepfake video released by Buzzfeed, in which the former president is actually voiced by Jordan Peele.

"This is a dangerous time. Moving forward, we need to be more vigilant with what we trust from the internet," Peele continues in split-screen, revealing the ongoing manipulation of Obama's image. "How we move forward in the age of information is going to be the difference between whether we survive or whether we become some kind of f**ked-up dystopia."

Deepfakes have only proliferated in the two years since, and many have put words in the mouths of Obama, German Chancellor Angela Merkel, President Donald Trump and other public figures. But the consequences for public discourse haven't so far risen to the dire predictions made in 2018. Do deepfakes really have the potential to bring about "dystopia," "sabotage" politicians and undermine "shared reality"?

While previous public research demonstrated video-morphing technology, including a 2016 demo from the Technical University of Munich, the creation of public deepfake tools, like the open-source FakeApp and subsequent Faceswap software, has resulted in an explosion of deepfake content online.

Rather than the laborious process of rigging and animating 3D models employed by special-effects artists, deepfakes are created using machine-learning techniques—such as generative adversarial networks, which pit two neural networks against each other to refine their output through trial and error—made possible by open-source tools like Google's TensorFlow.
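To make the adversarial idea concrete, here is a deliberately tiny sketch—my illustration, not code from any actual deepfake tool—in which a one-line "generator" learns to mimic scalar "real" data by fooling a logistic "discriminator." Real deepfake models operate on images with deep networks; this toy uses hand-derived gradients in NumPy rather than TensorFlow, but the alternating two-player training loop is the same shape.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: scalars clustered around 4.0 (a stand-in for real images)
def sample_real(n):
    return rng.normal(4.0, 0.1, n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0   # generator g(z) = a*z + b, starts far from the data
w, c = 0.1, 0.0   # discriminator D(x) = sigmoid(w*x + c)
lr = 0.05

for step in range(2000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    xr = sample_real(16)
    z = rng.normal(0.0, 1.0, 16)
    xf = a * z + b
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    gw = np.mean(-(1 - dr) * xr + df * xf)   # d(cross-entropy loss)/dw
    gc = np.mean(-(1 - dr) + df)             # d(cross-entropy loss)/dc
    w, c = w - lr * gw, c - lr * gc

    # Generator step: push D(fake) toward 1 (non-saturating loss)
    z = rng.normal(0.0, 1.0, 16)
    xf = a * z + b
    df = sigmoid(w * xf + c)
    ga = np.mean(-(1 - df) * w * z)
    gb = np.mean(-(1 - df) * w)
    a, b = a - lr * ga, b - lr * gb

# b should have drifted toward the real-data mean of ~4.0
print(f"generator offset b = {b:.2f}")
```

Each round of this trial-and-error loop makes the discriminator slightly better at spotting fakes and the generator slightly better at producing them, which is exactly the arms race that makes deepfake output steadily more convincing.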

Deepfakes have been used to swap one actor for another in a famous performance, put celebrity faces on pornographic models' bodies and augment Bill Hader's impersonations of Tom Cruise and Arnold Schwarzenegger. The technology has raised many questions about people's right to their own image and spurred ongoing efforts to create better deepfake-detection tools (likely to be an ongoing arms race), but reality and politics have yet to be overturned by it.

One possible reason is that conventional methods are already capable of disrupting public and political discourse with false information. Fabricated news websites and social media have efficiently spread elaborate conspiracy theories and false information without any need for a deepfake video of the accused saying what they are falsely alleged to have said or done. Social media is already awash in propaganda from state actors and private companies.

An AFP journalist reviews a deepfake video in Washington D.C. on January 25, 2019. ALEXANDRA ROBINSON/AFP via Getty Images

Rather than an unprecedented escalation in information warfare, the deepfake may become just another tool in an already dangerous array of methods for manipulation. In July, the author of multiple editorials in the Jerusalem Post and the Times of Israel was revealed to be a fabricated mouthpiece for parties unknown, his image created by machine-learning methods similar to those used to make deepfakes.

According to Reuters, the fictional journalist Oliver Taylor was used to falsely smear a Palestinian rights activist and her husband in the press. Rather than being used to put false words in the mouths of the powerful to spread chaos, this appears to be a case of deepfake technology employed to puppeteer the press on behalf of a cloaked agenda.

The mere existence of deepfakes has also resulted in new ways to attack legitimate information, such as the claim spread by Mississippi Republican congressional candidate Winnie Heartstrong that video of the police killing of George Floyd was a fabricated deepfake.

While wide-scale manipulation by deepfakes may yet materialize, the real dangers to our public discourse are already manifesting in strange and varied scenarios. Seeing through fabricated personas contrived to parrot the beliefs of hidden actors will have to become part of individual media literacy.

For now, it's still possible to detect many deepfakes with careful observation. While not foolproof, the MIT Media Lab recommends several deepfake "artifacts" to look out for, including too-smooth cheeks or foreheads; weird shadows around eyes and eyebrows; unnatural facial hair; shifting moles; too much or too little blinking and off-sized or off-color lips.