
Learning about Deepfakes

As we get ready for the first-ever Deepfake Awareness Day, it is important to learn more about what Deepfakes are. Enabled by artificial intelligence, Deepfakes are videos, images, or voice recordings that appear to come from a real person but are actually synthetic: the likeness or voice of the person being faked is artificially created by digital means. Generally considered a dangerous development in technology, a Deepfake can be a video of a politician reciting lines that were never spoken, the face of an actress or an ex-spouse placed onto someone else’s body in a sex scene, or the voice of a corporate officer authorizing a wire transfer of funds. The term itself, attributed to a Reddit forum user, is a portmanteau of “deep learning” and “fake”, emphasizing both the technology’s origins in machine learning and the fundamental duplicity involved in its use.

Examples are the best way to understand the technology. The actor Jordan Peele, for instance, participated in a Deepfake in April 2018: a video of President Barack Obama speaking in Peele’s voice, with the former president’s mouth movements and facial expressions matched to the actor’s performance. That video was intended to be informational, demonstrating what Deepfakes can do. Politicians in Argentina, Germany, and India, by contrast, have been presented maliciously, with intent to do political harm. Fake pornography is particularly malicious. Also in 2018, celebrities such as Daisy Ridley of Star Wars fame and Gal Gadot, who portrays Wonder Woman, were placed into porn scenes using Deepfake tools such as an app called FakeApp. Fake pornography can ruin a person’s reputation before the “that’s a Deepfake” objection has a chance to be raised. Worse, if the fake is convincing enough, it can cause significant real-world damage beyond digital media. The ability to imitate someone’s voice has already been blamed for corporate officers being tricked into transferring funds based on banking instructions given in Deepfake phone calls, and the rapid rise of remote work during the COVID-19 pandemic has vastly increased the danger of this type of cybercrime. Lastly, A.I. can generate convincing, lifelike faces of people who do not even exist, enabling sock puppet and other fake social media accounts to give the impression of large numbers of people using a service or expressing an opinion when in reality there are few. Corporations and politicians can be pressured into responding to “public opinion” that doesn’t actually exist.
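The face-generation systems mentioned above are typically built on generative adversarial networks (GANs): a generator network learns to produce fakes while a discriminator network learns to tell real from fake, and each improves by competing with the other. Below is a minimal, illustrative sketch of that adversarial loop using plain numpy on toy one-dimensional data; every dimension, learning rate, and data value here is made up for illustration, and real Deepfake systems use far larger networks:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy setup: both "networks" are single linear layers, purely to show
# the adversarial update loop, not a realistic architecture.
noise_dim, data_dim, batch = 4, 2, 8
G = rng.normal(scale=0.1, size=(noise_dim, data_dim))  # generator weights
D = rng.normal(scale=0.1, size=(data_dim, 1))          # discriminator weights
lr = 0.05

# Stand-in "real" data the generator tries to imitate.
real = rng.normal(loc=3.0, size=(batch, data_dim))

for step in range(100):
    z = rng.normal(size=(batch, noise_dim))
    fake = z @ G                                       # generator forward pass

    # Discriminator step: push scores on real data toward 1, fake toward 0
    # (gradients of the binary cross-entropy loss w.r.t. D).
    d_real = sigmoid(real @ D)
    d_fake = sigmoid(fake @ D)
    grad_D = real.T @ (d_real - 1.0) + fake.T @ d_fake
    D -= lr * grad_D / batch

    # Generator step: push the discriminator's scores on fakes toward 1.
    d_fake = sigmoid(fake @ D)
    grad_fake = (d_fake - 1.0) @ D.T                   # dLoss/dfake
    G -= lr * (z.T @ grad_fake) / batch

# After training, sigmoid(fake @ D) measures how often fakes fool D.
fool_rate = float(sigmoid(fake @ D).mean())
```

The same tug-of-war, scaled up to deep convolutional networks trained on millions of photographs, is what yields faces of people who have never existed.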

A particularly interesting Deepfake is the famous Nixon “moon disaster” speech. Most people don’t recognize the speech because it never took place: the moon landing was successful, and President Nixon never had to deliver a eulogy for fallen astronauts. In 2019, however, a team spanning MIT and two private companies created a compelling synthetic video showing a Deepfake of Nixon, in both appearance and voice, addressing the nation in 1969. Although the research work to create the convincing video took over six months, it is certain that creating such videos will become faster and more accessible in the future, with serious consequences for people’s ability to separate reality from fiction.

There are, however, a few potential upsides to the technology worth mentioning. Deepfakes can be used to generate positive, consensual media such as corporate training or self-help videos. Instead of a text response to a query, like a search engine results page, a purely synthetic actor or a Deepfake of a real actor can offer a full-motion answer to a question or a walk-through of a solution to a problem. Most importantly, a perverse privacy benefit will emerge if and when Deepfakes become routine. Revenge porn, a technological scourge of modern relationships, could be easily and credibly dismissed by the victim with the “that’s a Deepfake” objection. In a roundabout way, privacy could be enhanced by people being able to claim, not without cause, that an image or voice recording of them is a Deepfake. One can even envision a service or device that deliberately generates random yet realistic background conversations to confound the creepy, privacy-busting aspects of smart speakers.

Deepfakes are a rapidly evolving technology that is here to stay, bringing significant downsides and limited upside. Only by being knowledgeable about them can people minimize the harm and find beneficial use cases. The Zombies of Things books will incorporate numerous examples of both positive and negative Deepfakes a quarter century into the future to help readers better understand the implications of the technology.

References:

Deepfake deception: the emerging threat of deepfake attacks
https://www.jdsupra.com/legalnews/deepfake-deception-the-emerging-threat-6130002/

We Are Truly Fucked: Everyone Is Making AI-Generated Fake Porn Now
https://www.vice.com/en/article/bjye8a/reddit-fake-porn-app-daisy-ridley

Deepfakes, explained
https://mitsloan.mit.edu/ideas-made-to-matter/deepfakes-explained

A Nixon Deepfake, a ‘Moon Disaster’ Speech and an Information Ecosystem at Risk
https://www.scientificamerican.com/article/a-nixon-deepfake-a-moon-disaster-speech-and-an-information-ecosystem-at-risk1/
