When you first click play on this YouTube video of a speech by German Chancellor Angela Merkel, it looks normal. She's speaking at a gathering of the Christian Social Union in Munich. But then you look closer.
About 10 seconds in, her face changes. In fact, it's not her face at all — it's President Donald Trump's. Merkel, like other world leaders, including Barack Obama and Argentina's president Mauricio Macri, has had her video edited to create what has been coined a "deepfake."
The word is a portmanteau of "deep learning" — a type of machine-learning method — and "fake."
A deepfake is an image, audio clip or video that has been manipulated with artificial intelligence software to depict something that seems real — but isn't.
Digital film editing has been around since the 1980s but was long confined mostly to big-budget movies, because it was expensive and time-intensive. Adding CGI to a film to create a fake landscape or person could take months and a team of specialists.
Now, machine learning does this intensive work. With artificial intelligence, one face or image can be superimposed onto an existing video frame by frame, with little input from the human editor — though the process does require fairly extensive source footage.
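The frame-by-frame superimposition described above can be sketched in miniature. The toy example below is an illustration, not real deepfake software: it "swaps" a small source patch into the same region of every frame of a tiny clip by alpha-blending pixel values. A real face-swap model would first detect the face and generate the replacement pixels; this sketch shows only the repetitive compositing step that such tools automate.

```python
def blend_patch(frame, patch, top, left, alpha=0.8):
    """Alpha-blend a small patch into one frame (a 2-D grid of grayscale pixels).

    In a real deepfake pipeline, `patch` would be a generated face and
    (top, left) would come from a face detector; here both are hand-picked
    to show only the per-frame compositing.
    """
    out = [row[:] for row in frame]  # copy so the original frame is untouched
    for r, patch_row in enumerate(patch):
        for c, value in enumerate(patch_row):
            original = frame[top + r][left + c]
            out[top + r][left + c] = round(alpha * value + (1 - alpha) * original)
    return out

# A tiny 4x4-pixel "video" of two identical frames, and a 2x2 "face" patch.
frames = [[[10] * 4 for _ in range(4)] for _ in range(2)]
patch = [[200, 200], [200, 200]]

# Apply the same edit to every frame — the tedious work machine learning now does.
edited = [blend_patch(f, patch, top=1, left=1) for f in frames]
print(edited[0][1][1])  # blended pixel: 0.8 * 200 + 0.2 * 10 = 162
```

The point of the sketch is the loop at the bottom: editing every frame by hand is what once took specialists months, while a model applies the same transformation across thousands of frames automatically.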
For a closer look at how this technology works, watch Michael Zollhoefer's "Deep Video Portraits." Zollhoefer is a visiting assistant professor in the computer science department at Stanford University.
The software that makes this possible is open-source — meaning that it's free and available on the internet for anyone who wants it. Tools like Google's TensorFlow and Facebook's DensePose allow hobbyists and those without much technical skill to make deepfakes.
Almost anyone can also create deepfake audio clips using software like Lyrebird or Adobe's forthcoming Project Voco. This means that, in addition to Donald Trump's head being superimposed on Angela Merkel, his voice could be synthesized as well.
While there is some promise to this type of technology, deepfakes still pose major concerns.
Like other new technologies, deepfakes found some of their first applications in pornography. The most frequent targets were female celebrities such as Olivia Wilde and Gal Gadot, whose faces were superimposed on porn stars in videos in late 2017.
Now, those wishing to create revenge porn or embarrassing videos of others can do so relatively easily through apps that let a user generate deepfakes from enough source footage. However, this isn't always legal.
PornHub, Reddit and Twitter all banned deepfake and AI-generated porn content, but it's difficult to manually monitor, and the content still exists and has moved to other corners of the internet, according to Vice News.
Vice News did some foundational reporting on deepfakes in late 2017, beginning a flood of reports from other news outlets across the world about the trend.
Perhaps more concerning than deepfakes' use in pornography, however, is their effect on trust. Trust is declining in the United States, according to the 2018 Edelman Trust Barometer, which monitors trust in institutions globally.
The United States had a steeper decline than any other country, with a staggering 37-point drop in trust across all institutions, including the media. Across the world, seven out of 10 respondents to the Edelman survey expressed concern about false information being used as a weapon.
If a viewer cannot trust audio or video, it becomes more difficult to trust all media and erodes trust in institutions like the legal system, government and more.
However, a 2018 Pew Research Center study found that individuals with more trust in the media were more likely to correctly identify factual statements.
Though deepfake media is relatively new, it can be combated with some critical thinking skills. Parents and educators can encourage media literacy by asking questions such as:
Who created the content?
How might others interpret what the content says?
What possible biases and opinions are shared in this content?
How is this video or image trying to get my attention?
What purpose does this content have?
Deepfakes can also be flagged by the same technology that makes them possible: artificial intelligence. According to tech news platform Wired, GIF-hosting company Gfycat can run deepfakes through its AI tool and flag a clip that may resemble someone but does not render perfectly in each frame.
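The detection idea described above — flagging clips that don't render consistently in every frame — can be illustrated with a toy consistency check. This is a hand-coded stand-in for what a real AI detector like Gfycat's would learn from data: each frame gets a score (here, an assumed measure of how well the face renders), and frames that deviate sharply from the clip's average are flagged as suspect.

```python
def flag_inconsistent_frames(scores, threshold=2.0):
    """Return indices of frames whose score deviates sharply from the clip's mean.

    `scores` stands in for a per-frame rendering-quality measure; a real
    detector would compute this with a learned model. This toy version flags
    frames more than `threshold` standard deviations from the mean.
    """
    mean = sum(scores) / len(scores)
    variance = sum((s - mean) ** 2 for s in scores) / len(scores)
    std = variance ** 0.5
    if std == 0:
        return []  # perfectly uniform clip: nothing stands out
    return [i for i, s in enumerate(scores) if abs(s - mean) / std > threshold]

# Nine well-rendered frames and one glitchy one (frame 7) — hypothetical scores.
frame_scores = [0.95, 0.96, 0.94, 0.95, 0.97, 0.95, 0.96, 0.40, 0.95, 0.96]
print(flag_inconsistent_frames(frame_scores))  # [7]
```

The sketch captures why imperfect deepfakes are detectable: a face that fails to render cleanly in even a few frames produces exactly this kind of statistical outlier across the clip.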
But technology will have a harder time keeping up as deepfakes become more sophisticated, making it increasingly important for media consumers to be thoughtfully critical of everything they encounter.